Feb 19 12:46:28 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 19 12:46:28 crc restorecon[4682]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 12:46:28 crc restorecon[4682]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 12:46:28 crc restorecon[4682]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 12:46:28 crc restorecon[4682]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 19 12:46:28 crc restorecon[4682]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 12:46:28 crc restorecon[4682]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 12:46:28 crc restorecon[4682]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 12:46:28 crc restorecon[4682]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 12:46:28 crc restorecon[4682]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 12:46:28 crc restorecon[4682]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 12:46:28 crc restorecon[4682]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 12:46:28 crc restorecon[4682]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:28 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 19 12:46:29 crc restorecon[4682]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc 
restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 12:46:29 crc restorecon[4682]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 12:46:29 crc restorecon[4682]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 19 12:46:29 crc restorecon[4682]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 19 12:46:30 crc kubenswrapper[4833]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 12:46:30 crc kubenswrapper[4833]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 19 12:46:30 crc kubenswrapper[4833]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 12:46:30 crc kubenswrapper[4833]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
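The four Flag deprecation warnings above (two more follow below) all point at the same migration: each flag has a counterpart field in the KubeletConfiguration file that --config loads, per the linked kubelet-config-file documentation. A minimal sketch of those counterparts, assuming the upstream kubelet.config.k8s.io/v1beta1 schema; every value here is an illustrative placeholder, not something read from this node:

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  # replaces --container-runtime-endpoint (socket path is a placeholder)
  containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
  # replaces --volume-plugin-dir
  volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
  # replaces --register-with-taints (the taint shown is hypothetical)
  registerWithTaints:
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
  # --minimum-container-ttl-duration has no direct field; per the warning,
  # use eviction thresholds instead:
  evictionHard:
    memory.available: 100Mi

The --system-reserved warning below maps the same way (a systemReserved map in this file), while --pod-infra-container-image has no config-file equivalent: per its message, the image garbage collector now gets the sandbox image from the CRI runtime.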
Feb 19 12:46:30 crc kubenswrapper[4833]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 19 12:46:30 crc kubenswrapper[4833]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.049778 4833 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.057977 4833 feature_gate.go:330] unrecognized feature gate: Example Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058016 4833 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058028 4833 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058040 4833 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058055 4833 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058069 4833 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058080 4833 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058091 4833 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058102 4833 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058113 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058150 4833 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058162 4833 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058172 4833 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058184 4833 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058194 4833 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058204 4833 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058215 4833 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058226 4833 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058236 4833 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058248 4833 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
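The long run of "unrecognized feature gate" warnings continuing below is the kubelet parsing a gate list containing names that match OpenShift cluster-level features (GatewayAPI, NewOLM, PinnedImages, and so on) which the kubelet binary itself does not define, suggesting the list is propagated from the cluster's FeatureGate configuration; each unknown name is logged at warning level (feature_gate.go:330) and skipped, and startup continues. The feature_gate.go:353 and :351 lines mark gates the kubelet does recognize but that are GA or deprecated, so setting them only draws a removal notice. For contrast, a featureGates stanza in the same KubeletConfiguration file would look like the sketch below; the first gate name is an upstream kubelet gate chosen purely as an illustration, the second is taken from the GA warnings in this log:

  featureGates:
    # An upstream kubelet gate name, shown only as an illustration of a
    # gate the kubelet recognizes.
    RotateKubeletServerCertificate: true
    # GA per the feature_gate.go:353 line above; setting it merely logs
    # "It will be removed in a future release."
    CloudDualStackNodeIPs: true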
Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058259 4833 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058269 4833 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058279 4833 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058289 4833 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058299 4833 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058308 4833 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058318 4833 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058328 4833 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058337 4833 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058347 4833 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058356 4833 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058365 4833 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058373 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058381 4833 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058390 4833 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058398 4833 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058406 4833 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058414 4833 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058423 4833 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058431 4833 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058440 4833 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058448 4833 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058456 4833 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058465 4833 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058473 4833 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058481 4833 feature_gate.go:330] unrecognized feature gate: 
ManagedBootImages Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058535 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058545 4833 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058556 4833 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058565 4833 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058576 4833 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058587 4833 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058596 4833 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058606 4833 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058615 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058626 4833 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058637 4833 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058646 4833 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058654 4833 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058663 4833 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058671 4833 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058680 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058688 4833 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058699 4833 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058707 4833 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058715 4833 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058724 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058732 4833 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058741 4833 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058749 4833 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.058757 4833 feature_gate.go:330] 
unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.060662 4833 flags.go:64] FLAG: --address="0.0.0.0" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.060688 4833 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.060721 4833 flags.go:64] FLAG: --anonymous-auth="true" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.060733 4833 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.060745 4833 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.060755 4833 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.060768 4833 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.060780 4833 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.060790 4833 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.060800 4833 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.060810 4833 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.060833 4833 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.060843 4833 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.060853 4833 flags.go:64] FLAG: --cgroup-root="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.060862 4833 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.060872 4833 flags.go:64] FLAG: --client-ca-file="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.060881 4833 flags.go:64] FLAG: --cloud-config="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.060891 4833 flags.go:64] FLAG: --cloud-provider="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.060900 4833 flags.go:64] FLAG: --cluster-dns="[]" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.060917 4833 flags.go:64] FLAG: --cluster-domain="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.060927 4833 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.060937 4833 flags.go:64] FLAG: --config-dir="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.060946 4833 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.060957 4833 flags.go:64] FLAG: --container-log-max-files="5" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.060969 4833 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.060979 4833 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.060990 4833 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061000 4833 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061009 4833 flags.go:64] FLAG: 
--contention-profiling="false" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061019 4833 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061029 4833 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061039 4833 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061049 4833 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061060 4833 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061070 4833 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061080 4833 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061089 4833 flags.go:64] FLAG: --enable-load-reader="false" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061099 4833 flags.go:64] FLAG: --enable-server="true" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061109 4833 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061124 4833 flags.go:64] FLAG: --event-burst="100" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061134 4833 flags.go:64] FLAG: --event-qps="50" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061143 4833 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061153 4833 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061163 4833 flags.go:64] FLAG: --eviction-hard="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061174 4833 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061184 4833 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061193 4833 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061216 4833 flags.go:64] FLAG: --eviction-soft="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061226 4833 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061235 4833 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061245 4833 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061254 4833 flags.go:64] FLAG: --experimental-mounter-path="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061263 4833 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061273 4833 flags.go:64] FLAG: --fail-swap-on="true" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061282 4833 flags.go:64] FLAG: --feature-gates="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061294 4833 flags.go:64] FLAG: --file-check-frequency="20s" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061307 4833 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061317 4833 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061328 4833 flags.go:64] FLAG: 
--healthz-bind-address="127.0.0.1" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061337 4833 flags.go:64] FLAG: --healthz-port="10248" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061347 4833 flags.go:64] FLAG: --help="false" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061356 4833 flags.go:64] FLAG: --hostname-override="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061366 4833 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061376 4833 flags.go:64] FLAG: --http-check-frequency="20s" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061386 4833 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061395 4833 flags.go:64] FLAG: --image-credential-provider-config="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061404 4833 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061414 4833 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061424 4833 flags.go:64] FLAG: --image-service-endpoint="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061433 4833 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061443 4833 flags.go:64] FLAG: --kube-api-burst="100" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061452 4833 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061462 4833 flags.go:64] FLAG: --kube-api-qps="50" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061472 4833 flags.go:64] FLAG: --kube-reserved="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061482 4833 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061492 4833 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061530 4833 flags.go:64] FLAG: --kubelet-cgroups="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061539 4833 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061549 4833 flags.go:64] FLAG: --lock-file="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061559 4833 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061569 4833 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061579 4833 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061593 4833 flags.go:64] FLAG: --log-json-split-stream="false" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061616 4833 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061626 4833 flags.go:64] FLAG: --log-text-split-stream="false" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061636 4833 flags.go:64] FLAG: --logging-format="text" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061645 4833 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061656 4833 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061666 4833 flags.go:64] FLAG: 
--manifest-url="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061676 4833 flags.go:64] FLAG: --manifest-url-header="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061688 4833 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061698 4833 flags.go:64] FLAG: --max-open-files="1000000" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061709 4833 flags.go:64] FLAG: --max-pods="110" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061719 4833 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061729 4833 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061739 4833 flags.go:64] FLAG: --memory-manager-policy="None" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061748 4833 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061758 4833 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061768 4833 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061777 4833 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061799 4833 flags.go:64] FLAG: --node-status-max-images="50" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061808 4833 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061818 4833 flags.go:64] FLAG: --oom-score-adj="-999" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061828 4833 flags.go:64] FLAG: --pod-cidr="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061837 4833 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061851 4833 flags.go:64] FLAG: --pod-manifest-path="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061861 4833 flags.go:64] FLAG: --pod-max-pids="-1" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061872 4833 flags.go:64] FLAG: --pods-per-core="0" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061883 4833 flags.go:64] FLAG: --port="10250" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061895 4833 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061907 4833 flags.go:64] FLAG: --provider-id="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061919 4833 flags.go:64] FLAG: --qos-reserved="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061930 4833 flags.go:64] FLAG: --read-only-port="10255" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061940 4833 flags.go:64] FLAG: --register-node="true" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061949 4833 flags.go:64] FLAG: --register-schedulable="true" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061959 4833 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061975 4833 flags.go:64] FLAG: --registry-burst="10" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061984 4833 flags.go:64] FLAG: 
--registry-qps="5" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.061994 4833 flags.go:64] FLAG: --reserved-cpus="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.062004 4833 flags.go:64] FLAG: --reserved-memory="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.062016 4833 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.062026 4833 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.062036 4833 flags.go:64] FLAG: --rotate-certificates="false" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.062065 4833 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.062075 4833 flags.go:64] FLAG: --runonce="false" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.062085 4833 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.062095 4833 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.062105 4833 flags.go:64] FLAG: --seccomp-default="false" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.062114 4833 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.062125 4833 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.062135 4833 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.062145 4833 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.062154 4833 flags.go:64] FLAG: --storage-driver-password="root" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.062164 4833 flags.go:64] FLAG: --storage-driver-secure="false" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.062174 4833 flags.go:64] FLAG: --storage-driver-table="stats" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.062183 4833 flags.go:64] FLAG: --storage-driver-user="root" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.062193 4833 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.062203 4833 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.062212 4833 flags.go:64] FLAG: --system-cgroups="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.062222 4833 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.062237 4833 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.062247 4833 flags.go:64] FLAG: --tls-cert-file="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.062256 4833 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.062274 4833 flags.go:64] FLAG: --tls-min-version="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.062284 4833 flags.go:64] FLAG: --tls-private-key-file="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.062293 4833 flags.go:64] FLAG: --topology-manager-policy="none" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.062303 4833 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.062313 4833 flags.go:64] FLAG: 
--topology-manager-scope="container" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.062322 4833 flags.go:64] FLAG: --v="2" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.062334 4833 flags.go:64] FLAG: --version="false" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.062346 4833 flags.go:64] FLAG: --vmodule="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.062357 4833 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.062367 4833 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062643 4833 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062657 4833 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062669 4833 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062679 4833 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062690 4833 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062707 4833 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062717 4833 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062727 4833 feature_gate.go:330] unrecognized feature gate: Example Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062737 4833 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062748 4833 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062756 4833 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062765 4833 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062773 4833 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062783 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062793 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062801 4833 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062811 4833 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062820 4833 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062829 4833 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062840 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062848 4833 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 12:46:30 crc kubenswrapper[4833]: 
W0219 12:46:30.062857 4833 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062866 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062874 4833 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062883 4833 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062892 4833 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062900 4833 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062908 4833 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062917 4833 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062925 4833 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062934 4833 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062942 4833 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062952 4833 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062960 4833 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062968 4833 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062977 4833 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062985 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.062999 4833 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
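At verbosity v=2 (see FLAG: --v="2" below) the kubelet logs its entire effective flag set once per start, which makes the flags.go dump above a convenient source of truth for how this node is actually configured. A small sketch that folds such a journal dump into a dictionary; the regex simply mirrors the flags.go lines shown here and is an assumption about this log format, not a stable API:

import re

# Matches the 'flags.go:64] FLAG: --name="value"' lines in the dump above.
FLAG_RE = re.compile(r'flags\.go:\d+\] FLAG: (--[\w-]+)="(.*?)"')

def parse_flags(journal_text: str) -> dict[str, str]:
    """Collect kubelet FLAG lines from a journal dump into {flag: value}."""
    return {m.group(1): m.group(2) for m in FLAG_RE.finditer(journal_text)}

sample = 'I0219 12:46:30.060937 4833 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"'
print(parse_flags(sample))  # {'--config': '/etc/kubernetes/kubelet.conf'}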
Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.063018 4833 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.063028 4833 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.063038 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.063047 4833 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.063055 4833 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.063064 4833 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.063073 4833 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.063081 4833 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.063090 4833 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.063099 4833 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.063107 4833 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.063116 4833 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.063125 4833 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.063136 4833 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.063147 4833 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.063156 4833 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.063165 4833 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.063174 4833 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.063183 4833 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.063191 4833 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.063200 4833 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.063208 4833 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.063216 4833 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.063225 4833 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.063234 4833 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.063243 4833 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.063251 4833 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.063260 4833 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.063271 4833 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.063281 4833 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.063289 4833 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.063301 4833 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.063309 4833 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.063324 4833 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.075606 4833 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.075878 4833 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076024 4833 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076045 4833 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076097 4833 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076108 4833 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
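The feature gates: {map[...]} line above is Go's fmt rendering of the kubelet's effective map[string]bool; it appears several times in this boot because the gates are re-resolved at each stage of configuration merging, and the surrounding unrecognized-gate warnings are OpenShift-side gate names that the upstream kubelet's gate table does not know, so it warns and skips them. If you want the effective map programmatically, a sketch that assumes only this fmt layout:

import re

def parse_feature_gates(line: str) -> dict[str, bool]:
    """Parse Go's 'map[Name:true Other:false ...]' rendering into a dict."""
    body = re.search(r"map\[(.*?)\]", line)
    if not body:
        return {}
    return {
        name: value == "true"
        for name, value in (pair.split(":") for pair in body.group(1).split())
    }

line = "feature gates: {map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}"
gates = parse_feature_gates(line)
print(gates["KMSv1"], gates["NodeSwap"])  # True False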
Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076121 4833 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076130 4833 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076140 4833 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076148 4833 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076157 4833 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076166 4833 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076175 4833 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076183 4833 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076192 4833 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076200 4833 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076209 4833 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076217 4833 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076225 4833 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076233 4833 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076282 4833 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076294 4833 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076302 4833 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076311 4833 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076320 4833 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076328 4833 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076336 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076345 4833 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076354 4833 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076362 4833 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076370 4833 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076377 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076386 4833 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076394 4833 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076401 4833 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076409 4833 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076416 4833 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076424 4833 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076433 4833 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076440 4833 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076450 4833 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076463 4833 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076472 4833 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076482 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076491 4833 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076529 4833 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076539 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076550 4833 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076560 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076570 4833 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076582 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076591 4833 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076600 4833 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076610 4833 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076619 4833 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076627 4833 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076635 4833 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076643 4833 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076651 4833 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076658 4833 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076666 4833 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076674 4833 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076681 4833 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076689 4833 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076697 4833 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076705 4833 feature_gate.go:330] unrecognized feature gate: Example Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076712 
4833 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076720 4833 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076728 4833 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076736 4833 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076743 4833 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076751 4833 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076758 4833 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.076773 4833 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.076993 4833 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077006 4833 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077015 4833 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077023 4833 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077033 4833 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077041 4833 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077050 4833 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077062 4833 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077070 4833 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077079 4833 feature_gate.go:330] unrecognized feature gate: Example Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077087 4833 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077095 4833 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077103 4833 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077111 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077119 4833 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077127 4833 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077135 4833 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077143 4833 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077151 4833 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077159 4833 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077166 4833 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077174 4833 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077182 4833 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077190 4833 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077200 4833 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077211 4833 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077220 4833 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077228 4833 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077237 4833 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077245 4833 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077254 4833 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077263 4833 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077271 4833 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077279 4833 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077288 4833 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077296 4833 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077304 4833 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077311 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077319 4833 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077327 4833 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077334 4833 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077343 4833 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077350 4833 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077357 4833 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077365 4833 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077373 4833 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077381 4833 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077388 4833 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077399 4833 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077409 4833 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077417 4833 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077425 4833 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077434 4833 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077442 4833 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077450 4833 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077457 4833 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077465 4833 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077473 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077480 4833 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077488 4833 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077526 4833 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077537 4833 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077547 4833 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077556 4833 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077564 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077572 4833 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077582 4833 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077592 4833 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077601 4833 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077610 4833 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.077618 4833 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.077631 4833 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.077900 4833 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.085863 4833 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.086024 4833 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.087911 4833 server.go:997] "Starting client certificate rotation"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.087957 4833 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.089087 4833 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-10 05:01:05.999600031 +0000 UTC
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.089200 4833 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.113768 4833 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 19 12:46:30 crc kubenswrapper[4833]: E0219 12:46:30.116871 4833 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.116883 4833 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.138366 4833 log.go:25] "Validated CRI v1 runtime API"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.171367 4833 log.go:25] "Validated CRI v1 image API"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.173893 4833 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.179428 4833 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-19-12-41-53-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.179473 4833 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.211120 4833 manager.go:217] Machine: {Timestamp:2026-02-19 12:46:30.206040799 +0000 UTC m=+0.601559627 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:dc14cf1a-5576-4d69-98fb-0c44d3f24b1f BootID:9bc9539f-520c-4440-a07d-375a239a8e0f Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:ae:af:1f Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:ae:af:1f Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:1d:be:dd Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:d6:32:13 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:8d:10:2e Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:7d:13:a6 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:82:c2:2c:a4:6b:7a Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:4e:bf:d6:c0:b5:c0 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.211552 4833 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
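
[Editor's note] The client certificate above expires on 2026-02-24, yet the logged rotation deadline is 2026-01-10, well before expiry. client-go's certificate manager deliberately schedules renewal at a jittered point late in the certificate's validity window so a fleet of kubelets does not renew all at once. The standalone sketch below reproduces that policy; the 0.7 + 0.2*rand factor and the one-year validity are assumptions based on upstream client-go behavior, not values taken from this log.

// Sketch of a jittered certificate-rotation deadline, illustrative only.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point at 70-90% of the cert's validity
// window (assumed jitter policy; matches upstream client-go as I recall).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:52:08Z") // expiration from the log
	notBefore := notAfter.Add(-365 * 24 * time.Hour)                // assumed one-year validity
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}

The deadline in the log (2026-01-10) sits at roughly 88% of a one-year window ending 2026-02-24, which is consistent with that 70-90% band.
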
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.211942 4833 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.212571 4833 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.212853 4833 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.212901 4833 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.213223 4833 topology_manager.go:138] "Creating topology manager with none policy"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.213242 4833 container_manager_linux.go:303] "Creating device plugin manager"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.214154 4833 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.214207 4833 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.215155 4833 state_mem.go:36] "Initialized new in-memory state store"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.215771 4833 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.221595 4833 kubelet.go:418] "Attempting to sync node with API server"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.221629 4833 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
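
[Editor's note] The "Creating Container Manager object" entry pins down this node's resource accounting: KubeReserved is null, SystemReserved is 200m CPU / 350Mi memory / 350Mi ephemeral-storage, and the hard eviction threshold for memory.available is 100Mi. Under the standard node-allocatable formula (allocatable = capacity - kubeReserved - systemReserved - hardEviction), the memory works out as in the quick sketch below, using MemoryCapacity from the cAdvisor Machine line above. This is illustrative arithmetic, not kubelet code.

// Back-of-the-envelope allocatable-memory check for the NodeConfig above.
package main

import "fmt"

const Mi = 1024 * 1024

func main() {
	capacity := int64(33654128640)  // MemoryCapacity from the Machine line
	kubeReserved := int64(0)        // "KubeReserved":null in the NodeConfig
	systemReserved := int64(350 * Mi)
	hardEviction := int64(100 * Mi) // memory.available < 100Mi threshold

	allocatable := capacity - kubeReserved - systemReserved - hardEviction
	fmt.Printf("allocatable memory: %d bytes (%.2f GiB)\n",
		allocatable, float64(allocatable)/(1024*1024*1024))
	// Prints roughly 33182269440 bytes (30.90 GiB).
}
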
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.221666 4833 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.221686 4833 kubelet.go:324] "Adding apiserver pod source"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.221703 4833 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.227734 4833 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.227841 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused
Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.227845 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused
Feb 19 12:46:30 crc kubenswrapper[4833]: E0219 12:46:30.227935 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError"
Feb 19 12:46:30 crc kubenswrapper[4833]: E0219 12:46:30.227945 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.228702 4833 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
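
[Editor's note] Every List/Watch against the API server above fails with "connection refused" (the apiserver at api-int.crc.testing:6443 is not up yet), yet kubelet startup proceeds. That is by design: static pods arrive through the watched /etc/kubernetes/manifests directory, not the API. A minimal Go sketch of such a manifest-directory file source follows; it polls only, whereas the real kubelet file source also uses inotify, so treat it as illustrative, not as kubelet's file.go.

// Sketch of a polling static-pod manifest source, illustrative only.
package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	dir := "/etc/kubernetes/manifests"
	seen := map[string]time.Time{}
	// 20s mirrors the kubelet's default file-check frequency.
	for range time.Tick(20 * time.Second) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			fmt.Println("read manifests:", err)
			continue
		}
		for _, e := range entries {
			info, err := e.Info()
			if err != nil {
				continue
			}
			if prev, ok := seen[e.Name()]; !ok || info.ModTime().After(prev) {
				seen[e.Name()] = info.ModTime()
				fmt.Println("static pod manifest added/updated:", e.Name())
			}
		}
	}
}
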
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.230274 4833 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.232077 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.232107 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.232116 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.232125 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.232140 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.232148 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.232157 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.232171 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.232182 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.232192 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.232205 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.232213 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.233154 4833 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.233734 4833 server.go:1280] "Started kubelet"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.233862 4833 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.234237 4833 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.235023 4833 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.235067 4833 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 19 12:46:30 crc systemd[1]: Started Kubernetes Kubelet.
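
[Editor's note] The podresources endpoint above is throttled at qps=100 with burstTokens=10. The kubelet uses a library rate limiter for this; the self-contained token-bucket sketch below reproduces the same policy and is illustrative only.

// Token-bucket sketch of the qps=100 / burst=10 policy logged above.
package main

import (
	"fmt"
	"time"
)

type bucket struct {
	tokens float64
	max    float64
	rate   float64 // tokens added per second (the qps)
	last   time.Time
}

func newBucket(qps float64, burst int) *bucket {
	return &bucket{tokens: float64(burst), max: float64(burst), rate: qps, last: time.Now()}
}

// allow refills the bucket from elapsed time and consumes one token if available.
func (b *bucket) allow() bool {
	now := time.Now()
	b.tokens += now.Sub(b.last).Seconds() * b.rate
	if b.tokens > b.max {
		b.tokens = b.max
	}
	b.last = now
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	lim := newBucket(100, 10) // qps=100, burstTokens=10 as logged
	granted := 0
	for i := 0; i < 50; i++ {
		if lim.allow() {
			granted++
		}
	}
	fmt.Println("granted without waiting:", granted) // roughly the burst size
}
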
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.237730 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.237777 4833 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.238481 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 19:34:23.036144175 +0000 UTC
Feb 19 12:46:30 crc kubenswrapper[4833]: E0219 12:46:30.238647 4833 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 12:46:30 crc kubenswrapper[4833]: E0219 12:46:30.239557 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="200ms"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.239816 4833 server.go:460] "Adding debug handlers to kubelet server"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.240004 4833 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.240161 4833 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.240133 4833 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.241116 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused
Feb 19 12:46:30 crc kubenswrapper[4833]: E0219 12:46:30.241238 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.241585 4833 factory.go:55] Registering systemd factory
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.246637 4833 factory.go:221] Registration of the systemd container factory successfully
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.247307 4833 factory.go:153] Registering CRI-O factory
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.247390 4833 factory.go:221] Registration of the crio container factory successfully
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.247625 4833 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.247663 4833 factory.go:103] Registering Raw factory
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.247691 4833 manager.go:1196] Started watching for new ooms in manager
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.248961 4833 manager.go:319] Starting recovery of all containers
Feb 19 12:46:30 crc kubenswrapper[4833]: E0219 12:46:30.248302 4833 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.222:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1895a695ada3e852 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 12:46:30.233696338 +0000 UTC m=+0.629215116,LastTimestamp:2026-02-19 12:46:30.233696338 +0000 UTC m=+0.629215116,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.260060 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.260624 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.260804 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.261008 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.261137 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.261312 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.261472 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.261647 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.261812 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.261935 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.262051 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.262166 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.262404 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.262560 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.262702 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.262816 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.262946 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.263063 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.263190 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.263306 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.263420 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.263578 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.263701 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.263840 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.264040 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.264166 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.264311 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.264444 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.264890 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.265029 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.265152 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.265280 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.265397 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.265548 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.265688 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.265840 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.265959 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.266073 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.266186 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.268492 4833 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.268685 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.268807 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.268921 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.269080 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.269219 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.269335 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.269453 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.269629 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.269755 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.269879 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.270029 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.270147 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.270260 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.270382 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.270534 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.270681 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.270805 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.270921 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.271035 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.271149 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.271282 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.271401 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.271550 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.271673 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.271790 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.271922 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.272038 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.272149 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.272260 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.272431 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.272584 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.272727 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.272847 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.272959 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.273072 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.273184 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.273306 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.273424 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.273572 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.273694 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.273806 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.273941 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.274066 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.274191 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.274307 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.274421 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.274575 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.274712 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.274834 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.274949 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.275067 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.275193 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.275318 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.275432 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.275583 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.275704 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.275821 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.275962 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.276079 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.276190 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.276344 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.276475 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.276627 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.276761 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.276879 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.277010 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.277129 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.277244 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.277372 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.277490 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.277661 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.277783 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.277896 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.278018 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.278139 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.278254 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.278366 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.278479 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.278650 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.278768 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.278892 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.279006 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.279130 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.279254 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.279375 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.279520 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.279664 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.279781 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.279893 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.280015 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.280131 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.280247 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.280426 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.280764 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.280908 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.281025 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.281136 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.281247 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.281357 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.281468 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.281633 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.281761 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.281920 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.282033 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.282155 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.282288 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.282408 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.282554 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.282673 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.282785 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.282913 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.283027 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" 
volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.283138 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.283247 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.283356 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.283478 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.283703 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.283824 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.283943 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.284065 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.284184 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.284345 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.284478 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.284611 4833 manager.go:324] Recovery completed Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.284629 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.284733 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.284763 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.284782 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.284801 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.284820 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.284839 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.284858 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.284879 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.284898 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.284916 4833 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.284933 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.284951 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.284968 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.284986 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.285008 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.285026 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.285044 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.285062 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.285084 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.285111 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.285136 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.285160 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.285182 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.285206 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.285291 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.285311 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.285329 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.285379 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.285400 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.285421 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.285471 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.285532 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.285555 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.285575 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.285626 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.285647 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.285667 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.285866 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.285887 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.285907 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.285961 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.285980 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.286001 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.286056 4833 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.286077 4833 reconstruct.go:97] "Volume reconstruction finished" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.286096 4833 reconciler.go:26] "Reconciler: start to sync state" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.297395 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.299404 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.299479 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.299524 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.301805 4833 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.301871 4833 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.301901 4833 state_mem.go:36] "Initialized new in-memory state store" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.310587 4833 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.313562 4833 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.313626 4833 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.313666 4833 kubelet.go:2335] "Starting kubelet main sync loop" Feb 19 12:46:30 crc kubenswrapper[4833]: E0219 12:46:30.313742 4833 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.315252 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 19 12:46:30 crc kubenswrapper[4833]: E0219 12:46:30.315374 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.322843 4833 policy_none.go:49] "None policy: Start" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.323896 4833 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.323929 4833 state_mem.go:35] "Initializing new in-memory state store" Feb 19 12:46:30 crc kubenswrapper[4833]: E0219 12:46:30.339267 4833 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.403445 4833 manager.go:334] "Starting Device Plugin manager" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.403558 4833 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.403579 4833 server.go:79] "Starting device plugin registration server" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.404197 4833 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.404223 4833 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.404931 4833 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.405118 4833 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.405138 4833 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.414307 4833 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.414439 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:30 crc kubenswrapper[4833]: E0219 12:46:30.414873 4833 
eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.415887 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.415940 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.415956 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.416144 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.416910 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.417009 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.419018 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.419067 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.419085 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.419138 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.419182 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.419203 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.419469 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.419708 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.419932 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.420718 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.420764 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.420776 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.421091 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.421266 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.421351 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.421765 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.421790 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.421802 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.423489 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.423581 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.423660 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.424012 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.424048 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.424063 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.424027 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.424195 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.424271 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.425174 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.425199 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.425211 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.425388 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.425429 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.425915 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.425974 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.425997 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.426357 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.426388 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.426401 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:30 crc kubenswrapper[4833]: E0219 12:46:30.441116 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="400ms" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.488549 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.488594 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.488621 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.488643 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.488663 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.488687 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.488770 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.488815 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.488838 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.488863 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.488888 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.488910 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.488930 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.488959 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.488982 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.505072 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.506524 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.506586 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.506604 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.506640 4833 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: E0219 12:46:30.507192 4833 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.222:6443: connect: connection refused" node="crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.590095 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.590179 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.590223 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.590257 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.590287 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.590302 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.590316 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.590384 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.590396 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.590337 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.590447 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.590437 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.590490 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.590470 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.590428 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.590558 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.590479 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.590467 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.590414 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.590616 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.590637 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.590656 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.590692 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.590709 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.590728 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.590777 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.590798 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.590809 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.590882 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.590980 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.707966 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.709675 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.709748 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.709774 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.709822 4833 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: E0219 12:46:30.710456 4833 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.222:6443: connect: connection refused" node="crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.763759 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.789456 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.814955 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.817410 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-df34b498224928b39ed79dcddd5fe194aa5b2f737684e4c4757945a302e0414c WatchSource:0}: Error finding container df34b498224928b39ed79dcddd5fe194aa5b2f737684e4c4757945a302e0414c: Status 404 returned error can't find the container with id df34b498224928b39ed79dcddd5fe194aa5b2f737684e4c4757945a302e0414c
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.826550 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.829308 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-36761dbfea2129220acd023ee9c3de1604fe94074dfb6d87a289391791838f5f WatchSource:0}: Error finding container 36761dbfea2129220acd023ee9c3de1604fe94074dfb6d87a289391791838f5f: Status 404 returned error can't find the container with id 36761dbfea2129220acd023ee9c3de1604fe94074dfb6d87a289391791838f5f
Feb 19 12:46:30 crc kubenswrapper[4833]: I0219 12:46:30.835839 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 12:46:30 crc kubenswrapper[4833]: E0219 12:46:30.841816 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="800ms"
Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.849110 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-93c42285a11cfa56660d6e962a2ac9f98496a5569955737a7022345f17a40850 WatchSource:0}: Error finding container 93c42285a11cfa56660d6e962a2ac9f98496a5569955737a7022345f17a40850: Status 404 returned error can't find the container with id 93c42285a11cfa56660d6e962a2ac9f98496a5569955737a7022345f17a40850
Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.865545 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-8163690541ef6cbe1ebf37ca0546f609baf6d90dad76cfcda3289614b7e2f302 WatchSource:0}: Error finding container 8163690541ef6cbe1ebf37ca0546f609baf6d90dad76cfcda3289614b7e2f302: Status 404 returned error can't find the container with id 8163690541ef6cbe1ebf37ca0546f609baf6d90dad76cfcda3289614b7e2f302
Feb 19 12:46:30 crc kubenswrapper[4833]: W0219 12:46:30.869941 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-d69f83b69ba4505a5407888d60fea6df4e72b88b5891a3ac06f57386c33eb25a WatchSource:0}: Error finding container d69f83b69ba4505a5407888d60fea6df4e72b88b5891a3ac06f57386c33eb25a: Status 404 returned error can't find the container with id d69f83b69ba4505a5407888d60fea6df4e72b88b5891a3ac06f57386c33eb25a
Feb 19 12:46:31 crc kubenswrapper[4833]: I0219 12:46:31.111446 4833 kubelet_node_status.go:401]
"Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:31 crc kubenswrapper[4833]: I0219 12:46:31.113554 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:31 crc kubenswrapper[4833]: I0219 12:46:31.113596 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:31 crc kubenswrapper[4833]: I0219 12:46:31.113608 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:31 crc kubenswrapper[4833]: I0219 12:46:31.113632 4833 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 12:46:31 crc kubenswrapper[4833]: E0219 12:46:31.114041 4833 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.222:6443: connect: connection refused" node="crc" Feb 19 12:46:31 crc kubenswrapper[4833]: W0219 12:46:31.179849 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 19 12:46:31 crc kubenswrapper[4833]: E0219 12:46:31.179957 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 19 12:46:31 crc kubenswrapper[4833]: I0219 12:46:31.236955 4833 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 19 12:46:31 crc kubenswrapper[4833]: I0219 12:46:31.238994 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 05:51:00.217382877 +0000 UTC Feb 19 12:46:31 crc kubenswrapper[4833]: W0219 12:46:31.268253 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 19 12:46:31 crc kubenswrapper[4833]: E0219 12:46:31.268378 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 19 12:46:31 crc kubenswrapper[4833]: I0219 12:46:31.322049 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d69f83b69ba4505a5407888d60fea6df4e72b88b5891a3ac06f57386c33eb25a"} Feb 19 12:46:31 crc kubenswrapper[4833]: I0219 12:46:31.323400 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8163690541ef6cbe1ebf37ca0546f609baf6d90dad76cfcda3289614b7e2f302"} Feb 19 12:46:31 crc kubenswrapper[4833]: I0219 12:46:31.326101 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"93c42285a11cfa56660d6e962a2ac9f98496a5569955737a7022345f17a40850"} Feb 19 12:46:31 crc kubenswrapper[4833]: I0219 12:46:31.327555 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"36761dbfea2129220acd023ee9c3de1604fe94074dfb6d87a289391791838f5f"} Feb 19 12:46:31 crc kubenswrapper[4833]: I0219 12:46:31.328610 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"df34b498224928b39ed79dcddd5fe194aa5b2f737684e4c4757945a302e0414c"} Feb 19 12:46:31 crc kubenswrapper[4833]: W0219 12:46:31.362819 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 19 12:46:31 crc kubenswrapper[4833]: E0219 12:46:31.362918 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 19 12:46:31 crc kubenswrapper[4833]: W0219 12:46:31.468629 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 19 12:46:31 crc kubenswrapper[4833]: E0219 12:46:31.468729 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 19 12:46:31 crc kubenswrapper[4833]: E0219 12:46:31.643349 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="1.6s" Feb 19 12:46:31 crc kubenswrapper[4833]: I0219 12:46:31.914665 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:31 crc kubenswrapper[4833]: I0219 12:46:31.916670 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:31 crc kubenswrapper[4833]: I0219 12:46:31.916753 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:31 crc kubenswrapper[4833]: I0219 12:46:31.916779 4833 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 19 12:46:31 crc kubenswrapper[4833]: I0219 12:46:31.916827 4833 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 12:46:31 crc kubenswrapper[4833]: E0219 12:46:31.917540 4833 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.222:6443: connect: connection refused" node="crc" Feb 19 12:46:32 crc kubenswrapper[4833]: I0219 12:46:32.174918 4833 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 12:46:32 crc kubenswrapper[4833]: E0219 12:46:32.176268 4833 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 19 12:46:32 crc kubenswrapper[4833]: I0219 12:46:32.236705 4833 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 19 12:46:32 crc kubenswrapper[4833]: I0219 12:46:32.239782 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 01:55:08.444430615 +0000 UTC Feb 19 12:46:32 crc kubenswrapper[4833]: I0219 12:46:32.333485 4833 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e" exitCode=0 Feb 19 12:46:32 crc kubenswrapper[4833]: I0219 12:46:32.333707 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:32 crc kubenswrapper[4833]: I0219 12:46:32.333675 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e"} Feb 19 12:46:32 crc kubenswrapper[4833]: I0219 12:46:32.335536 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:32 crc kubenswrapper[4833]: I0219 12:46:32.335574 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:32 crc kubenswrapper[4833]: I0219 12:46:32.335586 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:32 crc kubenswrapper[4833]: I0219 12:46:32.336373 4833 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5b7a69f689ddb71849508df8aa39a7bf0790e5d9f900c8bb11f7f73be7a24015" exitCode=0 Feb 19 12:46:32 crc kubenswrapper[4833]: I0219 12:46:32.336461 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5b7a69f689ddb71849508df8aa39a7bf0790e5d9f900c8bb11f7f73be7a24015"} Feb 19 12:46:32 crc kubenswrapper[4833]: I0219 12:46:32.336540 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Feb 19 12:46:32 crc kubenswrapper[4833]: I0219 12:46:32.337369 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:32 crc kubenswrapper[4833]: I0219 12:46:32.337403 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:32 crc kubenswrapper[4833]: I0219 12:46:32.337414 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:32 crc kubenswrapper[4833]: I0219 12:46:32.338706 4833 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="8e4a10831f306ab5bef8b7c1651e2ef504b932c9101f8929440fd8a250fe3dd7" exitCode=0 Feb 19 12:46:32 crc kubenswrapper[4833]: I0219 12:46:32.338762 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:32 crc kubenswrapper[4833]: I0219 12:46:32.338764 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"8e4a10831f306ab5bef8b7c1651e2ef504b932c9101f8929440fd8a250fe3dd7"} Feb 19 12:46:32 crc kubenswrapper[4833]: I0219 12:46:32.339813 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:32 crc kubenswrapper[4833]: I0219 12:46:32.339870 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:32 crc kubenswrapper[4833]: I0219 12:46:32.339892 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:32 crc kubenswrapper[4833]: I0219 12:46:32.340867 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:32 crc kubenswrapper[4833]: I0219 12:46:32.342365 4833 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="2c7691cd25733ea26644ce2b86eb92587cf9c0545fbf32d8a39203f8ef305709" exitCode=0 Feb 19 12:46:32 crc kubenswrapper[4833]: I0219 12:46:32.342426 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"2c7691cd25733ea26644ce2b86eb92587cf9c0545fbf32d8a39203f8ef305709"} Feb 19 12:46:32 crc kubenswrapper[4833]: I0219 12:46:32.342586 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:32 crc kubenswrapper[4833]: I0219 12:46:32.342626 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:32 crc kubenswrapper[4833]: I0219 12:46:32.342646 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:32 crc kubenswrapper[4833]: I0219 12:46:32.342629 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:32 crc kubenswrapper[4833]: I0219 12:46:32.343962 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:32 crc kubenswrapper[4833]: I0219 12:46:32.343995 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 12:46:32 crc kubenswrapper[4833]: I0219 12:46:32.344007 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:32 crc kubenswrapper[4833]: I0219 12:46:32.346107 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b3caf63dc183f47734bc7ada20dc729d98465449779a981f21080c0f23ef7e7d"} Feb 19 12:46:32 crc kubenswrapper[4833]: I0219 12:46:32.346139 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c08afefdd71557d6c17668ed12d83aa416dcb83414ff4c8d741df835d2cdfdf5"} Feb 19 12:46:32 crc kubenswrapper[4833]: I0219 12:46:32.346151 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"be4ef1418f9464e95e739e3a543b14668c04159065dea6093d086d75a32d919e"} Feb 19 12:46:33 crc kubenswrapper[4833]: W0219 12:46:33.122366 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 19 12:46:33 crc kubenswrapper[4833]: E0219 12:46:33.122610 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 19 12:46:33 crc kubenswrapper[4833]: I0219 12:46:33.236168 4833 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 19 12:46:33 crc kubenswrapper[4833]: I0219 12:46:33.240306 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 02:08:12.334128847 +0000 UTC Feb 19 12:46:33 crc kubenswrapper[4833]: E0219 12:46:33.244002 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="3.2s" Feb 19 12:46:33 crc kubenswrapper[4833]: W0219 12:46:33.318230 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.222:6443: connect: connection refused Feb 19 12:46:33 crc kubenswrapper[4833]: E0219 12:46:33.318332 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.222:6443: connect: connection refused" logger="UnhandledError" Feb 19 12:46:33 crc 
kubenswrapper[4833]: I0219 12:46:33.351758 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"ada98b9a43531c85c0d139ca191a5f919475ae8ec002eae5162eae6e2113d736"} Feb 19 12:46:33 crc kubenswrapper[4833]: I0219 12:46:33.351817 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:33 crc kubenswrapper[4833]: I0219 12:46:33.352833 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:33 crc kubenswrapper[4833]: I0219 12:46:33.352898 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:33 crc kubenswrapper[4833]: I0219 12:46:33.352960 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:33 crc kubenswrapper[4833]: I0219 12:46:33.356099 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"fef38f10daeecfe4798979b037586cb7553da6ed81347ba4a1c2f1fa6671e269"} Feb 19 12:46:33 crc kubenswrapper[4833]: I0219 12:46:33.356147 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6facdd3120ee01990d31602f404ba1bfdd78ebfe3de3e0208a1f1058bede3472"} Feb 19 12:46:33 crc kubenswrapper[4833]: I0219 12:46:33.356168 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0fe0c469c1d92b8cc814628a16596c31a9dae61fdc7820423d94a8a25c622c02"} Feb 19 12:46:33 crc kubenswrapper[4833]: I0219 12:46:33.356320 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:33 crc kubenswrapper[4833]: I0219 12:46:33.358799 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:33 crc kubenswrapper[4833]: I0219 12:46:33.358865 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:33 crc kubenswrapper[4833]: I0219 12:46:33.358887 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:33 crc kubenswrapper[4833]: I0219 12:46:33.361190 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:33 crc kubenswrapper[4833]: I0219 12:46:33.361448 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"990bc2af578ada7fc2630b0d8ff77ab12bc15e0883ae62d8f64598944f6255f8"} Feb 19 12:46:33 crc kubenswrapper[4833]: I0219 12:46:33.362718 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:33 crc kubenswrapper[4833]: I0219 12:46:33.362760 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:33 crc kubenswrapper[4833]: I0219 
12:46:33.362771 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:33 crc kubenswrapper[4833]: I0219 12:46:33.368806 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a"} Feb 19 12:46:33 crc kubenswrapper[4833]: I0219 12:46:33.368872 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94"} Feb 19 12:46:33 crc kubenswrapper[4833]: I0219 12:46:33.368895 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888"} Feb 19 12:46:33 crc kubenswrapper[4833]: I0219 12:46:33.369069 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7"} Feb 19 12:46:33 crc kubenswrapper[4833]: I0219 12:46:33.376686 4833 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7a4038f82fcf597eae54af12eb702f1def15dcedbdf193f25f3ad16f08acd305" exitCode=0 Feb 19 12:46:33 crc kubenswrapper[4833]: I0219 12:46:33.376780 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7a4038f82fcf597eae54af12eb702f1def15dcedbdf193f25f3ad16f08acd305"} Feb 19 12:46:33 crc kubenswrapper[4833]: I0219 12:46:33.376874 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:33 crc kubenswrapper[4833]: I0219 12:46:33.380853 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:33 crc kubenswrapper[4833]: I0219 12:46:33.380941 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:33 crc kubenswrapper[4833]: I0219 12:46:33.380955 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:33 crc kubenswrapper[4833]: I0219 12:46:33.518184 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:33 crc kubenswrapper[4833]: I0219 12:46:33.520109 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:33 crc kubenswrapper[4833]: I0219 12:46:33.520166 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:33 crc kubenswrapper[4833]: I0219 12:46:33.520176 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:33 crc kubenswrapper[4833]: I0219 12:46:33.520232 4833 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 12:46:33 crc kubenswrapper[4833]: E0219 12:46:33.520886 4833 kubelet_node_status.go:99] "Unable to register node 
with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.222:6443: connect: connection refused" node="crc" Feb 19 12:46:34 crc kubenswrapper[4833]: I0219 12:46:34.241064 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 20:33:15.611684806 +0000 UTC Feb 19 12:46:34 crc kubenswrapper[4833]: I0219 12:46:34.382674 4833 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0f797b4a433dae52d9ceaffbb68c44796fa2d560dc6d800d722360fc87cded59" exitCode=0 Feb 19 12:46:34 crc kubenswrapper[4833]: I0219 12:46:34.382758 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0f797b4a433dae52d9ceaffbb68c44796fa2d560dc6d800d722360fc87cded59"} Feb 19 12:46:34 crc kubenswrapper[4833]: I0219 12:46:34.382911 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:34 crc kubenswrapper[4833]: I0219 12:46:34.384214 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:34 crc kubenswrapper[4833]: I0219 12:46:34.384256 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:34 crc kubenswrapper[4833]: I0219 12:46:34.384272 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:34 crc kubenswrapper[4833]: I0219 12:46:34.389851 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5"} Feb 19 12:46:34 crc kubenswrapper[4833]: I0219 12:46:34.389933 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:34 crc kubenswrapper[4833]: I0219 12:46:34.389974 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:34 crc kubenswrapper[4833]: I0219 12:46:34.390051 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 12:46:34 crc kubenswrapper[4833]: I0219 12:46:34.390116 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:34 crc kubenswrapper[4833]: I0219 12:46:34.390081 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:34 crc kubenswrapper[4833]: I0219 12:46:34.391314 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:34 crc kubenswrapper[4833]: I0219 12:46:34.391354 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:34 crc kubenswrapper[4833]: I0219 12:46:34.391371 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:34 crc kubenswrapper[4833]: I0219 12:46:34.391655 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:34 crc kubenswrapper[4833]: I0219 12:46:34.391719 4833 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:34 crc kubenswrapper[4833]: I0219 12:46:34.391742 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:34 crc kubenswrapper[4833]: I0219 12:46:34.391834 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:34 crc kubenswrapper[4833]: I0219 12:46:34.391865 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:34 crc kubenswrapper[4833]: I0219 12:46:34.391864 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:34 crc kubenswrapper[4833]: I0219 12:46:34.391883 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:34 crc kubenswrapper[4833]: I0219 12:46:34.391912 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:34 crc kubenswrapper[4833]: I0219 12:46:34.391998 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:35 crc kubenswrapper[4833]: I0219 12:46:35.241380 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 05:59:06.213186169 +0000 UTC Feb 19 12:46:35 crc kubenswrapper[4833]: I0219 12:46:35.398917 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"eff934ea9f4b6ad9ee9ee7782ba45b5e1f9d3dabbb2e6833cf3284788505fdc5"} Feb 19 12:46:35 crc kubenswrapper[4833]: I0219 12:46:35.398992 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8f9241774056d141bc3d57476d34b7e7e9ae12071470f9ffb9be7c8de3bbf4f6"} Feb 19 12:46:35 crc kubenswrapper[4833]: I0219 12:46:35.399017 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"39acefb21e408c6460bb79a137d5ef70f8b2f115ae02343c3c381162f94dbbd4"} Feb 19 12:46:35 crc kubenswrapper[4833]: I0219 12:46:35.399082 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:35 crc kubenswrapper[4833]: I0219 12:46:35.399127 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 12:46:35 crc kubenswrapper[4833]: I0219 12:46:35.399082 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:35 crc kubenswrapper[4833]: I0219 12:46:35.400733 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:35 crc kubenswrapper[4833]: I0219 12:46:35.400810 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:35 crc kubenswrapper[4833]: I0219 12:46:35.400835 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:35 crc kubenswrapper[4833]: I0219 12:46:35.401148 4833 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:35 crc kubenswrapper[4833]: I0219 12:46:35.401186 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:35 crc kubenswrapper[4833]: I0219 12:46:35.401207 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:35 crc kubenswrapper[4833]: I0219 12:46:35.533851 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 12:46:35 crc kubenswrapper[4833]: I0219 12:46:35.534145 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:35 crc kubenswrapper[4833]: I0219 12:46:35.535733 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:35 crc kubenswrapper[4833]: I0219 12:46:35.535802 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:35 crc kubenswrapper[4833]: I0219 12:46:35.535818 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:36 crc kubenswrapper[4833]: I0219 12:46:36.241965 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 09:21:16.94471585 +0000 UTC Feb 19 12:46:36 crc kubenswrapper[4833]: I0219 12:46:36.409656 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1b0c9208c7315fb14f64393ef380e4d8607621ce23d864d6c6e3b7b32bbdd18a"} Feb 19 12:46:36 crc kubenswrapper[4833]: I0219 12:46:36.409725 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"268313c40337b7d066467e913fb7da7ebfa51c84a70ef636724227b58e1eb6d3"} Feb 19 12:46:36 crc kubenswrapper[4833]: I0219 12:46:36.409786 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:36 crc kubenswrapper[4833]: I0219 12:46:36.409798 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:36 crc kubenswrapper[4833]: I0219 12:46:36.410986 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:36 crc kubenswrapper[4833]: I0219 12:46:36.411032 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:36 crc kubenswrapper[4833]: I0219 12:46:36.411042 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:36 crc kubenswrapper[4833]: I0219 12:46:36.411968 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:36 crc kubenswrapper[4833]: I0219 12:46:36.412002 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:36 crc kubenswrapper[4833]: I0219 12:46:36.412011 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:36 crc 
kubenswrapper[4833]: I0219 12:46:36.509058 4833 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 19 12:46:36 crc kubenswrapper[4833]: I0219 12:46:36.541325 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 12:46:36 crc kubenswrapper[4833]: I0219 12:46:36.721057 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:36 crc kubenswrapper[4833]: I0219 12:46:36.722976 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:36 crc kubenswrapper[4833]: I0219 12:46:36.723040 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:36 crc kubenswrapper[4833]: I0219 12:46:36.723052 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:36 crc kubenswrapper[4833]: I0219 12:46:36.723086 4833 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 12:46:37 crc kubenswrapper[4833]: I0219 12:46:37.061151 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 12:46:37 crc kubenswrapper[4833]: I0219 12:46:37.205915 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 12:46:37 crc kubenswrapper[4833]: I0219 12:46:37.206240 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:37 crc kubenswrapper[4833]: I0219 12:46:37.208062 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:37 crc kubenswrapper[4833]: I0219 12:46:37.208126 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:37 crc kubenswrapper[4833]: I0219 12:46:37.208145 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:37 crc kubenswrapper[4833]: I0219 12:46:37.242599 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 11:11:05.499270094 +0000 UTC Feb 19 12:46:37 crc kubenswrapper[4833]: I0219 12:46:37.413295 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:37 crc kubenswrapper[4833]: I0219 12:46:37.413445 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:37 crc kubenswrapper[4833]: I0219 12:46:37.415314 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:37 crc kubenswrapper[4833]: I0219 12:46:37.415357 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:37 crc kubenswrapper[4833]: I0219 12:46:37.415376 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:37 crc kubenswrapper[4833]: I0219 12:46:37.415566 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:37 crc kubenswrapper[4833]: I0219 12:46:37.415658 4833 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:37 crc kubenswrapper[4833]: I0219 12:46:37.415678 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:37 crc kubenswrapper[4833]: I0219 12:46:37.462082 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 19 12:46:38 crc kubenswrapper[4833]: I0219 12:46:38.243377 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 20:37:13.622802479 +0000 UTC Feb 19 12:46:38 crc kubenswrapper[4833]: I0219 12:46:38.347743 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 19 12:46:38 crc kubenswrapper[4833]: I0219 12:46:38.415959 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:38 crc kubenswrapper[4833]: I0219 12:46:38.416045 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:38 crc kubenswrapper[4833]: I0219 12:46:38.417549 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:38 crc kubenswrapper[4833]: I0219 12:46:38.417593 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:38 crc kubenswrapper[4833]: I0219 12:46:38.417608 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:38 crc kubenswrapper[4833]: I0219 12:46:38.418004 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:38 crc kubenswrapper[4833]: I0219 12:46:38.418056 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:38 crc kubenswrapper[4833]: I0219 12:46:38.418074 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:38 crc kubenswrapper[4833]: I0219 12:46:38.534450 4833 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 12:46:38 crc kubenswrapper[4833]: I0219 12:46:38.534586 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 12:46:39 crc kubenswrapper[4833]: I0219 12:46:39.244591 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 06:05:13.874682228 +0000 UTC Feb 19 12:46:39 crc kubenswrapper[4833]: I0219 12:46:39.418356 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:39 crc kubenswrapper[4833]: I0219 12:46:39.419319 4833 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:39 crc kubenswrapper[4833]: I0219 12:46:39.419372 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:39 crc kubenswrapper[4833]: I0219 12:46:39.419392 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:39 crc kubenswrapper[4833]: I0219 12:46:39.714979 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 12:46:39 crc kubenswrapper[4833]: I0219 12:46:39.715190 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:39 crc kubenswrapper[4833]: I0219 12:46:39.717024 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:39 crc kubenswrapper[4833]: I0219 12:46:39.717115 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:39 crc kubenswrapper[4833]: I0219 12:46:39.717134 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:40 crc kubenswrapper[4833]: I0219 12:46:40.245632 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 23:15:50.980956026 +0000 UTC Feb 19 12:46:40 crc kubenswrapper[4833]: E0219 12:46:40.415028 4833 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 12:46:41 crc kubenswrapper[4833]: I0219 12:46:41.233947 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 12:46:41 crc kubenswrapper[4833]: I0219 12:46:41.234299 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:41 crc kubenswrapper[4833]: I0219 12:46:41.236207 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:41 crc kubenswrapper[4833]: I0219 12:46:41.236282 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:41 crc kubenswrapper[4833]: I0219 12:46:41.236301 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:41 crc kubenswrapper[4833]: I0219 12:46:41.242968 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 12:46:41 crc kubenswrapper[4833]: I0219 12:46:41.246004 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 09:43:18.069543035 +0000 UTC Feb 19 12:46:41 crc kubenswrapper[4833]: I0219 12:46:41.426016 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:41 crc kubenswrapper[4833]: I0219 12:46:41.427552 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:41 crc kubenswrapper[4833]: I0219 12:46:41.427627 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 12:46:41 crc kubenswrapper[4833]: I0219 12:46:41.427645 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:41 crc kubenswrapper[4833]: I0219 12:46:41.432970 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 12:46:42 crc kubenswrapper[4833]: I0219 12:46:42.246951 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 16:53:55.764881702 +0000 UTC Feb 19 12:46:42 crc kubenswrapper[4833]: I0219 12:46:42.430186 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:42 crc kubenswrapper[4833]: I0219 12:46:42.432363 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:42 crc kubenswrapper[4833]: I0219 12:46:42.432422 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:42 crc kubenswrapper[4833]: I0219 12:46:42.432455 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:43 crc kubenswrapper[4833]: I0219 12:46:43.247636 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 02:20:01.271805815 +0000 UTC Feb 19 12:46:43 crc kubenswrapper[4833]: W0219 12:46:43.782336 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 19 12:46:43 crc kubenswrapper[4833]: I0219 12:46:43.782468 4833 trace.go:236] Trace[1034723871]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 12:46:33.781) (total time: 10000ms): Feb 19 12:46:43 crc kubenswrapper[4833]: Trace[1034723871]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (12:46:43.782) Feb 19 12:46:43 crc kubenswrapper[4833]: Trace[1034723871]: [10.000915117s] [10.000915117s] END Feb 19 12:46:43 crc kubenswrapper[4833]: E0219 12:46:43.782519 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 19 12:46:44 crc kubenswrapper[4833]: I0219 12:46:44.237322 4833 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 19 12:46:44 crc kubenswrapper[4833]: I0219 12:46:44.248674 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 04:20:49.921748816 +0000 UTC Feb 19 12:46:44 crc kubenswrapper[4833]: W0219 12:46:44.292241 4833 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 19 12:46:44 crc kubenswrapper[4833]: I0219 12:46:44.292338 4833 trace.go:236] Trace[1543831046]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 12:46:34.290) (total time: 10001ms): Feb 19 12:46:44 crc kubenswrapper[4833]: Trace[1543831046]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:46:44.292) Feb 19 12:46:44 crc kubenswrapper[4833]: Trace[1543831046]: [10.001649105s] [10.001649105s] END Feb 19 12:46:44 crc kubenswrapper[4833]: E0219 12:46:44.292363 4833 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 19 12:46:44 crc kubenswrapper[4833]: I0219 12:46:44.416919 4833 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 19 12:46:44 crc kubenswrapper[4833]: I0219 12:46:44.416991 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 19 12:46:44 crc kubenswrapper[4833]: I0219 12:46:44.421411 4833 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 19 12:46:44 crc kubenswrapper[4833]: I0219 12:46:44.421474 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 19 12:46:45 crc kubenswrapper[4833]: I0219 12:46:45.249244 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 00:11:29.608078447 +0000 UTC Feb 19 12:46:46 crc kubenswrapper[4833]: I0219 12:46:46.250290 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 03:31:57.127379609 +0000 UTC Feb 19 12:46:47 crc kubenswrapper[4833]: I0219 12:46:47.067285 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 12:46:47 crc kubenswrapper[4833]: I0219 12:46:47.067633 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:47 crc 
kubenswrapper[4833]: I0219 12:46:47.069765 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:47 crc kubenswrapper[4833]: I0219 12:46:47.069857 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:47 crc kubenswrapper[4833]: I0219 12:46:47.069880 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:47 crc kubenswrapper[4833]: I0219 12:46:47.074303 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 12:46:47 crc kubenswrapper[4833]: I0219 12:46:47.251600 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 06:30:26.591144496 +0000 UTC Feb 19 12:46:47 crc kubenswrapper[4833]: I0219 12:46:47.446173 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:47 crc kubenswrapper[4833]: I0219 12:46:47.447637 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:47 crc kubenswrapper[4833]: I0219 12:46:47.447719 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:47 crc kubenswrapper[4833]: I0219 12:46:47.447737 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:47 crc kubenswrapper[4833]: I0219 12:46:47.506935 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 19 12:46:47 crc kubenswrapper[4833]: I0219 12:46:47.507277 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:47 crc kubenswrapper[4833]: I0219 12:46:47.509103 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:47 crc kubenswrapper[4833]: I0219 12:46:47.509166 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:47 crc kubenswrapper[4833]: I0219 12:46:47.509186 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:47 crc kubenswrapper[4833]: I0219 12:46:47.524875 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 19 12:46:47 crc kubenswrapper[4833]: I0219 12:46:47.942048 4833 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 19 12:46:48 crc kubenswrapper[4833]: I0219 12:46:48.252411 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 01:26:29.990624126 +0000 UTC Feb 19 12:46:48 crc kubenswrapper[4833]: I0219 12:46:48.362187 4833 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 19 12:46:48 crc kubenswrapper[4833]: I0219 12:46:48.448716 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:48 crc kubenswrapper[4833]: I0219 12:46:48.450155 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
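
The certificate_manager records above print a different rotation deadline on every tick even though the expiration never changes, because the deadline is re-jittered each time it is evaluated. A minimal sketch of such a computation, assuming (as an approximation of client-go's certificate manager, not its exact code) a random point in roughly the 70-90% span of the validity window; only the notAfter below comes from the log, the one-year validity is an assumption:

```go
// Sketch: why the logged rotation deadline scatters between evaluations.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point in roughly the 70-90% span of the
// certificate's validity window (an approximation of client-go's jitter).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	validity := notAfter.Sub(notBefore)
	jitter := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(float64(validity) * jitter))
}

func main() {
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // expiry from the log
	notBefore := notAfter.AddDate(-1, 0, 0)                   // assumed one-year validity
	for i := 0; i < 3; i++ {
		// Each evaluation lands on a different deadline, as in the log.
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}
```

Feb 19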
12:46:48 crc kubenswrapper[4833]: I0219 12:46:48.450223 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:48 crc kubenswrapper[4833]: I0219 12:46:48.450246 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:48 crc kubenswrapper[4833]: I0219 12:46:48.535650 4833 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 12:46:48 crc kubenswrapper[4833]: I0219 12:46:48.535746 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 12:46:49 crc kubenswrapper[4833]: I0219 12:46:49.253215 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 22:50:23.642022362 +0000 UTC Feb 19 12:46:49 crc kubenswrapper[4833]: E0219 12:46:49.400467 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 19 12:46:49 crc kubenswrapper[4833]: I0219 12:46:49.407775 4833 trace.go:236] Trace[116053924]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 12:46:39.274) (total time: 10132ms): Feb 19 12:46:49 crc kubenswrapper[4833]: Trace[116053924]: ---"Objects listed" error: 10132ms (12:46:49.407) Feb 19 12:46:49 crc kubenswrapper[4833]: Trace[116053924]: [10.132852334s] [10.132852334s] END Feb 19 12:46:49 crc kubenswrapper[4833]: I0219 12:46:49.408099 4833 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 19 12:46:49 crc kubenswrapper[4833]: E0219 12:46:49.409832 4833 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Feb 19 12:46:49 crc kubenswrapper[4833]: I0219 12:46:49.410195 4833 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 19 12:46:49 crc kubenswrapper[4833]: I0219 12:46:49.410232 4833 trace.go:236] Trace[672967897]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 12:46:37.638) (total time: 11771ms): Feb 19 12:46:49 crc kubenswrapper[4833]: Trace[672967897]: ---"Objects listed" error: 11771ms (12:46:49.410) Feb 19 12:46:49 crc kubenswrapper[4833]: Trace[672967897]: [11.771615677s] [11.771615677s] END Feb 19 12:46:49 crc kubenswrapper[4833]: I0219 12:46:49.410257 4833 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 19 12:46:49 crc kubenswrapper[4833]: I0219 12:46:49.430212 4833 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
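
Alongside the probe failures above, the kubelet cannot yet register the node ("infra config cache not synchronized") and the GET for its heartbeat Lease in kube-node-lease times out, so the controller schedules a retry at the logged 6.4s interval. A minimal sketch fetching that Lease once the API server answers; the kubeconfig path is hypothetical, and the 10s timeout mirrors the ?timeout=10s in the failing URL:

```go
// Sketch: read the node heartbeat Lease the kubelet is failing to ensure.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second) // mirrors ?timeout=10s
	defer cancel()
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(ctx, "crc", metav1.GetOptions{})
	if err != nil {
		fmt.Println("lease still unreachable:", err)
		return
	}
	if lease.Spec.HolderIdentity != nil {
		fmt.Println("holder:", *lease.Spec.HolderIdentity, "renewed:", lease.Spec.RenewTime)
	}
}
```

Feb 19 12:46:49 crc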
kubenswrapper[4833]: I0219 12:46:49.450387 4833 csr.go:261] certificate signing request csr-7zjx7 is approved, waiting to be issued Feb 19 12:46:49 crc kubenswrapper[4833]: I0219 12:46:49.463629 4833 csr.go:257] certificate signing request csr-7zjx7 is issued Feb 19 12:46:49 crc kubenswrapper[4833]: I0219 12:46:49.501955 4833 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:59010->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 19 12:46:49 crc kubenswrapper[4833]: I0219 12:46:49.502040 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:59010->192.168.126.11:17697: read: connection reset by peer" Feb 19 12:46:49 crc kubenswrapper[4833]: I0219 12:46:49.502437 4833 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 19 12:46:49 crc kubenswrapper[4833]: I0219 12:46:49.502534 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.088296 4833 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 19 12:46:50 crc kubenswrapper[4833]: W0219 12:46:50.088720 4833 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 19 12:46:50 crc kubenswrapper[4833]: E0219 12:46:50.088856 4833 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.102.83.222:57680->38.102.83.222:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1895a695d2937566 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 12:46:30.853375334 +0000 UTC m=+1.248894142,LastTimestamp:2026-02-19 12:46:30.853375334 +0000 UTC m=+1.248894142,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
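
The csr.go records above show the two-step life of the serving-certificate request: csr-7zjx7 is first approved (a status condition) and then issued (the signed certificate lands in the status), after which transport.go tears down client connections to pick up the new credentials, and in-flight watches and event POSTs fail with "use of closed network connection". A minimal sketch inspecting the same CSR, assuming a hypothetical kubeconfig path:

```go
// Sketch: distinguish "approved" from "issued" on the CSR named in the log.
package main

import (
	"context"
	"fmt"

	certificatesv1 "k8s.io/api/certificates/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	csr, err := cs.CertificatesV1().CertificateSigningRequests().Get(
		context.Background(), "csr-7zjx7", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, cond := range csr.Status.Conditions {
		// csr.go:261 ("approved, waiting to be issued") corresponds to this condition.
		if cond.Type == certificatesv1.CertificateApproved {
			fmt.Println("approved:", cond.Reason)
		}
	}
	// csr.go:257 ("is issued") corresponds to the signed cert landing here.
	fmt.Println("issued:", len(csr.Status.Certificate) > 0)
}
```

Feb 19 12:46:50 crc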
kubenswrapper[4833]: W0219 12:46:50.089318 4833 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.235126 4833 apiserver.go:52] "Watching apiserver" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.240978 4833 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.241436 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.241965 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.242198 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.242364 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.242447 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.242586 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.242685 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:46:50 crc kubenswrapper[4833]: E0219 12:46:50.242701 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
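
Every "Error syncing pod, skipping" here (including the one that follows) traces back to the same condition: no CNI configuration file in /etc/kubernetes/cni/net.d/, so the runtime reports NetworkReady=false and new pod sandboxes cannot be started until the network provider writes a config. A minimal sketch of that check, reading the directory named in the message; the suffix filter is an assumption about typical CNI file names, not an exhaustive list:

```go
// Sketch: the readiness condition behind NetworkPluginNotReady.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Directory named in the log message; the runtime loads CNI config from here.
	const dir = "/etc/kubernetes/cni/net.d/"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI config dir:", err)
		return
	}
	found := 0
	for _, e := range entries {
		// Typical CNI config suffixes; an assumption for this sketch.
		if strings.HasSuffix(e.Name(), ".conf") || strings.HasSuffix(e.Name(), ".conflist") {
			fmt.Println("CNI config:", e.Name())
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration file in", dir, "- network provider not started yet")
	}
}
```

Feb 19 12:46:50 crc kubenswrapper[4833]: E0219 12:46:50.242881 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"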
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:46:50 crc kubenswrapper[4833]: E0219 12:46:50.242986 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.248065 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.248947 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.249433 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.249783 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.250370 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.250445 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.250716 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.252011 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.254096 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.254459 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 14:48:15.046153685 +0000 UTC Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.284890 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.306540 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.318255 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.341319 4833 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.341803 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.358892 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.387710 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.413284 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.414438 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.414482 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.414524 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.414548 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.414571 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.414592 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.414612 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.414634 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.414653 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.414673 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.414698 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.414717 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.414737 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.414757 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.414780 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.414799 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.414821 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.414843 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.414863 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.414886 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.414906 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.414933 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.414964 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.414995 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.415017 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.415042 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.415063 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.415089 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.415113 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.415174 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.415197 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.415218 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.415239 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.415260 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.415288 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.415312 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.415300 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.415334 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.415435 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.415477 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.415541 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.415573 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.415605 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.415613 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.415637 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.415666 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.415739 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.415774 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.415802 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.415828 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.415860 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.415890 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.415915 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.415927 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.415940 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.415962 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.415995 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416028 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416100 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416136 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416064 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416205 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416268 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416292 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416307 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416343 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416364 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416388 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416415 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416437 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416460 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416482 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416521 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416540 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416559 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416580 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416602 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416620 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416640 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416665 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416683 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416702 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416725 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416745 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416764 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416787 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416807 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416826 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416856 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416881 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416901 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 
12:46:50.416922 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416966 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416988 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417008 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417028 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417050 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417078 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417104 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417137 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417163 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 19 12:46:50 crc 
kubenswrapper[4833]: I0219 12:46:50.417184 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417213 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417240 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417260 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417279 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417298 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417317 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417341 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417362 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417389 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417444 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417464 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417487 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417523 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417543 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417563 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417612 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417640 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417659 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417677 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod 
\"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417695 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417714 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417733 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417752 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417770 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417789 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417807 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417826 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417849 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417869 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: 
\"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417891 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417908 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417928 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417986 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418005 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418024 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418046 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418067 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418088 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418109 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418129 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418147 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418167 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418188 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418220 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418244 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418269 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418296 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418322 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418340 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418360 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418377 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418432 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418450 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418468 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418521 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418543 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418579 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418599 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418615 4833 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418635 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418651 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418681 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418704 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418721 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418740 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418758 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418776 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418799 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 
12:46:50.418823 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418857 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418876 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418995 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419031 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419050 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419069 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419089 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419107 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419125 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") 
" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419144 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419164 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419184 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419206 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419224 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419241 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419259 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419278 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419294 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419315 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: 
\"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419332 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419352 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419387 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419408 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419427 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419456 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419478 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419774 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419801 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419821 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419840 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419861 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419880 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419901 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419921 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419974 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.420005 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.420031 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.420056 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.420082 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.420103 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.420124 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.420147 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.420167 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.420186 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.420206 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.420228 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.420247 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.420269 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.420410 4833 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.420436 4833 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.420454 4833 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.420468 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.420483 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.420522 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.420563 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.420578 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.420590 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.420601 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" 
DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416409 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416419 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416534 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416576 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416628 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416717 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416767 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416810 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416902 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.416958 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417067 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417081 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417118 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417266 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417309 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417327 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417449 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417563 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417563 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417694 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417800 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.420982 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417829 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.417847 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418052 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418093 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418250 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418456 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.418923 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419142 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419428 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419731 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). 
InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.419947 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.420440 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.421538 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.421931 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.422053 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.422264 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.422694 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.422780 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.423116 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.423312 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.423577 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.423859 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.424248 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.427429 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.427581 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). 
InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.427642 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.427819 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.427968 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.428028 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.428375 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.429676 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.430106 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.430829 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.431163 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.431412 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.431784 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.431789 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.432152 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.432449 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.432471 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.432512 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.432838 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.433069 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.433142 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.433279 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.433462 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.433595 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.433853 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.433998 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.434906 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.436627 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.436761 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.437004 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.437272 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.437302 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.437447 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.437570 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.437612 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.437967 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.438109 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.438285 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.438433 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.438486 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.438584 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.438800 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.439087 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.439348 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.439677 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.439884 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.439951 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.440198 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.440638 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.440753 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.440877 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.441022 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.441089 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.441222 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.441425 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.441617 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.441872 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.441871 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.441913 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.442196 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.442210 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.442208 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.442523 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.442554 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.442706 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.442890 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.443039 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.443238 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.443537 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.443732 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.443914 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.444101 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.444280 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.444368 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.444778 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.444898 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.445006 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.445254 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.445487 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.445753 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.445769 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.445875 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.446308 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.446378 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.446572 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.446604 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.446726 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.446832 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.446858 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.447088 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.447114 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.447325 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.447430 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.447518 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.447851 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.447943 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.448096 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.448319 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.448337 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.448414 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: E0219 12:46:50.448556 4833 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.448629 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: E0219 12:46:50.448652 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 12:46:50.948614029 +0000 UTC m=+21.344132797 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.448846 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.449160 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 12:46:50 crc kubenswrapper[4833]: E0219 12:46:50.449261 4833 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 12:46:50 crc kubenswrapper[4833]: E0219 12:46:50.449347 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 12:46:50.949331647 +0000 UTC m=+21.344850415 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.449615 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.449626 4833 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.449859 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.450173 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.450208 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.450402 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.451129 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: E0219 12:46:50.452422 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:46:50.952402053 +0000 UTC m=+21.347920831 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.452808 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). 
InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.453217 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.453399 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.453646 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.453654 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.453750 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.453975 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.454487 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.454741 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). 
InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.455593 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.455773 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.458585 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.463227 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.463517 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.463001 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.464001 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.466747 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.470771 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 12:46:50 crc kubenswrapper[4833]: E0219 12:46:50.471075 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 12:46:50 crc kubenswrapper[4833]: E0219 12:46:50.471107 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 12:46:50 crc kubenswrapper[4833]: E0219 12:46:50.471124 4833 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 12:46:50 crc kubenswrapper[4833]: E0219 12:46:50.471197 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 12:46:50.971174651 +0000 UTC m=+21.366693419 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.471458 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.471548 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: E0219 12:46:50.471762 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 12:46:50 crc kubenswrapper[4833]: E0219 12:46:50.471795 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 12:46:50 crc kubenswrapper[4833]: E0219 12:46:50.471807 4833 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 12:46:50 crc kubenswrapper[4833]: E0219 12:46:50.471844 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 12:46:50.971832988 +0000 UTC m=+21.367351756 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.471912 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.472370 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.472430 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.472561 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.472707 4833 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-19 12:41:49 +0000 UTC, rotation deadline is 2026-12-05 14:18:01.889694845 +0000 UTC Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.472748 4833 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6937h31m11.416949154s for next certificate rotation Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.473362 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.473586 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.474488 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.475204 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.475947 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.477098 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.477476 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.483989 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.484783 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). 
InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.484840 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.485659 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.486372 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.486430 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.486759 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.488224 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.491928 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.492313 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.492821 4833 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5" exitCode=255 Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.492854 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5"} Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.501797 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.511288 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.514117 4833 scope.go:117] "RemoveContainer" containerID="1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.516803 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.520864 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.520907 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: 
\"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.520992 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521003 4833 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521013 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521024 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521034 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521044 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521052 4833 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521038 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521060 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521121 4833 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521138 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521151 4833 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521004 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521163 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521231 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521252 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521262 4833 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521272 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521282 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521291 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: 
I0219 12:46:50.521301 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521299 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521311 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521381 4833 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521395 4833 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521408 4833 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521420 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521431 4833 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521444 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521456 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521468 4833 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521478 4833 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521488 4833 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521515 4833 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521526 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521538 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521550 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521561 4833 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521572 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521583 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521594 4833 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521605 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521615 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521625 4833 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521636 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521646 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: 
\"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521658 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521669 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.521681 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.522816 4833 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.522837 4833 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.522848 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.522861 4833 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.522871 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.522883 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.522892 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.522903 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.522913 4833 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.522923 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.522932 4833 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.522941 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.522951 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.522960 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.522969 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.522978 4833 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.522987 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.522996 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523034 4833 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523044 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523056 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523065 4833 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523074 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: 
\"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523083 4833 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523093 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523102 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523112 4833 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523122 4833 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523132 4833 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523140 4833 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523150 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523158 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523168 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523177 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523187 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523195 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523204 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523212 4833 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523222 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523231 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523241 4833 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523251 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523260 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523269 4833 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523278 4833 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523286 4833 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523298 4833 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523307 4833 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523317 4833 reconciler_common.go:293] "Volume 
detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523326 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523334 4833 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523343 4833 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523352 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523360 4833 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523369 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523377 4833 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523386 4833 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523394 4833 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523403 4833 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523411 4833 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523419 4833 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523429 4833 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523439 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523447 4833 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523455 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523464 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523473 4833 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523483 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523505 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523515 4833 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523523 4833 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523532 4833 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523540 4833 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523549 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523558 4833 reconciler_common.go:293] "Volume detached for volume 
\"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523566 4833 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523577 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523586 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523595 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523604 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523612 4833 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523621 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523629 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523637 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523646 4833 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523655 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523663 4833 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523672 4833 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523680 4833 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523689 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523697 4833 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523705 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523713 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523722 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523730 4833 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523739 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523747 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523756 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523765 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523773 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523781 4833 reconciler_common.go:293] "Volume detached for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523790 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523797 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523805 4833 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523813 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523821 4833 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523830 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523839 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523847 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523856 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523866 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523874 4833 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523882 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523891 4833 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523900 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523909 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523923 4833 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523932 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523940 4833 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523949 4833 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523958 4833 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523966 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523976 4833 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523984 4833 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.523992 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.524001 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.524011 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: 
\"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.524019 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.524028 4833 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.524038 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.524046 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.524032 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.524055 4833 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.524165 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.524177 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.524190 4833 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.524206 4833 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.524219 4833 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.535024 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.546605 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.561065 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19
T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.570768 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.572124 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.583570 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.584967 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.592744 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.595765 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.611636 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 12:46:50 crc kubenswrapper[4833]: W0219 12:46:50.617149 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-dfdf9b02cb55eb967b6d8d1eae8a78fdb5a0aa9a0b7b3e99619927114fd69cf0 WatchSource:0}: Error finding container dfdf9b02cb55eb967b6d8d1eae8a78fdb5a0aa9a0b7b3e99619927114fd69cf0: Status 404 returned error can't find the container with id dfdf9b02cb55eb967b6d8d1eae8a78fdb5a0aa9a0b7b3e99619927114fd69cf0 Feb 19 12:46:50 crc kubenswrapper[4833]: I0219 12:46:50.625373 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.032353 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.032423 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.032449 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.032468 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.032485 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" 
(UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:46:51 crc kubenswrapper[4833]: E0219 12:46:51.032581 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:46:52.032548907 +0000 UTC m=+22.428067685 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:46:51 crc kubenswrapper[4833]: E0219 12:46:51.032617 4833 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 12:46:51 crc kubenswrapper[4833]: E0219 12:46:51.032630 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 12:46:51 crc kubenswrapper[4833]: E0219 12:46:51.032652 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 12:46:51 crc kubenswrapper[4833]: E0219 12:46:51.032665 4833 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 12:46:51 crc kubenswrapper[4833]: E0219 12:46:51.032675 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 12:46:52.032660169 +0000 UTC m=+22.428178937 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 12:46:51 crc kubenswrapper[4833]: E0219 12:46:51.032692 4833 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 12:46:51 crc kubenswrapper[4833]: E0219 12:46:51.032732 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 12:46:51 crc kubenswrapper[4833]: E0219 12:46:51.032706 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 12:46:52.03269123 +0000 UTC m=+22.428210018 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 12:46:51 crc kubenswrapper[4833]: E0219 12:46:51.032766 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 12:46:51 crc kubenswrapper[4833]: E0219 12:46:51.032774 4833 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 12:46:51 crc kubenswrapper[4833]: E0219 12:46:51.032783 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 12:46:52.032762232 +0000 UTC m=+22.428281020 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 12:46:51 crc kubenswrapper[4833]: E0219 12:46:51.032802 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 12:46:52.032795603 +0000 UTC m=+22.428314371 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.255119 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 03:15:59.435947327 +0000 UTC Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.497545 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.499692 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7"} Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.500087 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.502240 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859"} Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.502277 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079"} Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.502290 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"dfdf9b02cb55eb967b6d8d1eae8a78fdb5a0aa9a0b7b3e99619927114fd69cf0"} Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.503890 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"387f216129b079f1520cc445e98ad89dd2f8d9b50ccb05d841a03cf109665925"} Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.505253 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab"} Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.505282 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"2fa4fdbbb2c3289eeaccc1fe3d946c1a7173658f249f6f7a5469543f33b4db44"} Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.533523 4833 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:51Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.548684 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:51Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.571473 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:51Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.591168 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:51Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.613328 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:51Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.630676 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:51Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.652346 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:51Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.666870 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:51Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.681342 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:51Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.691534 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:51Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.702206 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:51Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.711811 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:51Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.723022 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:51Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.735763 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:51Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.737002 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-p542x"] Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.737360 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-p542x" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.741104 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.741782 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.742918 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.757162 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:51Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.771933 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:51Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.780177 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:51Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.790602 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:51Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.800641 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:51Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.811235 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:51Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.820929 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:51Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.830664 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:51Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.839089 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a9544854-47a0-4750-b6a0-1f4a2bb1955a-hosts-file\") pod \"node-resolver-p542x\" (UID: \"a9544854-47a0-4750-b6a0-1f4a2bb1955a\") " pod="openshift-dns/node-resolver-p542x" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.839146 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67hx2\" (UniqueName: \"kubernetes.io/projected/a9544854-47a0-4750-b6a0-1f4a2bb1955a-kube-api-access-67hx2\") pod \"node-resolver-p542x\" (UID: \"a9544854-47a0-4750-b6a0-1f4a2bb1955a\") " pod="openshift-dns/node-resolver-p542x" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.940446 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67hx2\" (UniqueName: \"kubernetes.io/projected/a9544854-47a0-4750-b6a0-1f4a2bb1955a-kube-api-access-67hx2\") pod \"node-resolver-p542x\" (UID: \"a9544854-47a0-4750-b6a0-1f4a2bb1955a\") " pod="openshift-dns/node-resolver-p542x" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.940516 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a9544854-47a0-4750-b6a0-1f4a2bb1955a-hosts-file\") pod \"node-resolver-p542x\" (UID: \"a9544854-47a0-4750-b6a0-1f4a2bb1955a\") " pod="openshift-dns/node-resolver-p542x" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.940585 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a9544854-47a0-4750-b6a0-1f4a2bb1955a-hosts-file\") pod \"node-resolver-p542x\" (UID: \"a9544854-47a0-4750-b6a0-1f4a2bb1955a\") " pod="openshift-dns/node-resolver-p542x" Feb 19 12:46:51 crc kubenswrapper[4833]: I0219 12:46:51.962056 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67hx2\" (UniqueName: \"kubernetes.io/projected/a9544854-47a0-4750-b6a0-1f4a2bb1955a-kube-api-access-67hx2\") pod \"node-resolver-p542x\" (UID: \"a9544854-47a0-4750-b6a0-1f4a2bb1955a\") " pod="openshift-dns/node-resolver-p542x" Feb 19 12:46:52 crc kubenswrapper[4833]: E0219 12:46:52.041847 4833 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:46:54.041824588 +0000 UTC m=+24.437343346 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.041884 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.041938 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.041960 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.041980 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.041997 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:46:52 crc kubenswrapper[4833]: E0219 12:46:52.042087 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 12:46:52 crc kubenswrapper[4833]: E0219 12:46:52.042098 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 12:46:52 crc kubenswrapper[4833]: E0219 12:46:52.042108 4833 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 12:46:52 crc kubenswrapper[4833]: E0219 12:46:52.042137 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 12:46:54.042130966 +0000 UTC m=+24.437649734 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 12:46:52 crc kubenswrapper[4833]: E0219 12:46:52.042392 4833 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 12:46:52 crc kubenswrapper[4833]: E0219 12:46:52.042416 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 12:46:54.042408363 +0000 UTC m=+24.437927131 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 12:46:52 crc kubenswrapper[4833]: E0219 12:46:52.042461 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 12:46:52 crc kubenswrapper[4833]: E0219 12:46:52.042471 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 12:46:52 crc kubenswrapper[4833]: E0219 12:46:52.042478 4833 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 12:46:52 crc kubenswrapper[4833]: E0219 12:46:52.042516 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 12:46:54.042489635 +0000 UTC m=+24.438008393 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 12:46:52 crc kubenswrapper[4833]: E0219 12:46:52.042542 4833 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 12:46:52 crc kubenswrapper[4833]: E0219 12:46:52.042560 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 12:46:54.042554767 +0000 UTC m=+24.438073535 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.047762 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-p542x" Feb 19 12:46:52 crc kubenswrapper[4833]: W0219 12:46:52.061029 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9544854_47a0_4750_b6a0_1f4a2bb1955a.slice/crio-c26121c13eabc2f42e9b222d7fc18d000ae5ca6c5e63ae71933c0d253f9b3961 WatchSource:0}: Error finding container c26121c13eabc2f42e9b222d7fc18d000ae5ca6c5e63ae71933c0d253f9b3961: Status 404 returned error can't find the container with id c26121c13eabc2f42e9b222d7fc18d000ae5ca6c5e63ae71933c0d253f9b3961 Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.117157 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-flbc2"] Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.118023 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-flbc2" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.118855 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-c2lxp"] Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.119063 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.120362 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.121659 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.122328 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-9p75n"] Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.122533 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.122752 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.122856 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.123213 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.123337 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.123465 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.123682 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.124242 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.124681 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.125459 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.125963 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.135721 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.161398 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.185398 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.225798 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.243850 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-multus-socket-dir-parent\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.243890 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-system-cni-dir\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.243905 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-host-run-netns\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.243928 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmx9d\" (UniqueName: \"kubernetes.io/projected/407965f4-206f-457e-9a8b-90948a537d06-kube-api-access-gmx9d\") pod \"multus-additional-cni-plugins-flbc2\" (UID: \"407965f4-206f-457e-9a8b-90948a537d06\") " pod="openshift-multus/multus-additional-cni-plugins-flbc2" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.243946 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-host-var-lib-cni-multus\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.243964 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4e1957a0-ea7d-4831-ae8f-630a9529ece1-multus-daemon-config\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.243978 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/407965f4-206f-457e-9a8b-90948a537d06-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-flbc2\" (UID: \"407965f4-206f-457e-9a8b-90948a537d06\") " pod="openshift-multus/multus-additional-cni-plugins-flbc2" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.243992 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-multus-conf-dir\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.244007 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a396d626-cea2-42cf-84c5-943b0b85a92b-rootfs\") pod \"machine-config-daemon-c2lxp\" (UID: \"a396d626-cea2-42cf-84c5-943b0b85a92b\") " pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.244020 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-host-var-lib-kubelet\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.244033 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/407965f4-206f-457e-9a8b-90948a537d06-os-release\") pod \"multus-additional-cni-plugins-flbc2\" (UID: \"407965f4-206f-457e-9a8b-90948a537d06\") " pod="openshift-multus/multus-additional-cni-plugins-flbc2" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.244048 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4e1957a0-ea7d-4831-ae8f-630a9529ece1-cni-binary-copy\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.244061 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-hostroot\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.244076 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/407965f4-206f-457e-9a8b-90948a537d06-tuning-conf-dir\") pod \"multus-additional-cni-plugins-flbc2\" (UID: \"407965f4-206f-457e-9a8b-90948a537d06\") " pod="openshift-multus/multus-additional-cni-plugins-flbc2" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.244089 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-cnibin\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.244101 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-host-run-multus-certs\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.244115 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-etc-kubernetes\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.244129 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a396d626-cea2-42cf-84c5-943b0b85a92b-mcd-auth-proxy-config\") pod \"machine-config-daemon-c2lxp\" (UID: \"a396d626-cea2-42cf-84c5-943b0b85a92b\") " pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.244143 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-host-run-k8s-cni-cncf-io\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.244156 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btskh\" (UniqueName: \"kubernetes.io/projected/4e1957a0-ea7d-4831-ae8f-630a9529ece1-kube-api-access-btskh\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.244170 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-multus-cni-dir\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.244191 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-os-release\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.244204 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/407965f4-206f-457e-9a8b-90948a537d06-cni-binary-copy\") pod \"multus-additional-cni-plugins-flbc2\" (UID: \"407965f4-206f-457e-9a8b-90948a537d06\") " pod="openshift-multus/multus-additional-cni-plugins-flbc2" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.244219 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-host-var-lib-cni-bin\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.244234 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/407965f4-206f-457e-9a8b-90948a537d06-cnibin\") pod \"multus-additional-cni-plugins-flbc2\" (UID: \"407965f4-206f-457e-9a8b-90948a537d06\") " pod="openshift-multus/multus-additional-cni-plugins-flbc2" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.244248 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a396d626-cea2-42cf-84c5-943b0b85a92b-proxy-tls\") pod \"machine-config-daemon-c2lxp\" (UID: \"a396d626-cea2-42cf-84c5-943b0b85a92b\") " pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.244280 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/407965f4-206f-457e-9a8b-90948a537d06-system-cni-dir\") pod \"multus-additional-cni-plugins-flbc2\" (UID: \"407965f4-206f-457e-9a8b-90948a537d06\") " pod="openshift-multus/multus-additional-cni-plugins-flbc2" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.244293 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2vxj\" (UniqueName: \"kubernetes.io/projected/a396d626-cea2-42cf-84c5-943b0b85a92b-kube-api-access-m2vxj\") pod \"machine-config-daemon-c2lxp\" (UID: \"a396d626-cea2-42cf-84c5-943b0b85a92b\") " pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.245787 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.255571 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 04:16:06.433524445 +0000 UTC Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.260208 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.289073 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.305468 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.313826 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.313850 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:46:52 crc kubenswrapper[4833]: E0219 12:46:52.313912 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.313951 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:46:52 crc kubenswrapper[4833]: E0219 12:46:52.314143 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:46:52 crc kubenswrapper[4833]: E0219 12:46:52.314236 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.317736 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.318346 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.319907 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.320604 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.321670 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.321961 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.322282 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.322982 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.324116 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.324804 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.325882 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.326515 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.327820 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.328445 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.329092 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.330150 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.330741 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.331753 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.332202 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.332816 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.333905 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.334432 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.335689 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.336139 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.337208 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.337729 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.337815 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.338383 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.339587 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.340071 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.341075 4833 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.341598 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.342514 4833 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.342654 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.344545 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.345301 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmx9d\" (UniqueName: \"kubernetes.io/projected/407965f4-206f-457e-9a8b-90948a537d06-kube-api-access-gmx9d\") pod \"multus-additional-cni-plugins-flbc2\" (UID: \"407965f4-206f-457e-9a8b-90948a537d06\") " pod="openshift-multus/multus-additional-cni-plugins-flbc2" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.346775 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.347769 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.349465 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-host-var-lib-cni-multus\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.349558 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.349574 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4e1957a0-ea7d-4831-ae8f-630a9529ece1-multus-daemon-config\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.349618 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/407965f4-206f-457e-9a8b-90948a537d06-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-flbc2\" (UID: \"407965f4-206f-457e-9a8b-90948a537d06\") " pod="openshift-multus/multus-additional-cni-plugins-flbc2" Feb 
19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.349658 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-multus-conf-dir\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.349665 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-host-var-lib-cni-multus\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.349748 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a396d626-cea2-42cf-84c5-943b0b85a92b-rootfs\") pod \"machine-config-daemon-c2lxp\" (UID: \"a396d626-cea2-42cf-84c5-943b0b85a92b\") " pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.349684 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a396d626-cea2-42cf-84c5-943b0b85a92b-rootfs\") pod \"machine-config-daemon-c2lxp\" (UID: \"a396d626-cea2-42cf-84c5-943b0b85a92b\") " pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.349881 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-multus-conf-dir\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.349930 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-host-var-lib-kubelet\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.349962 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-host-var-lib-kubelet\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.350035 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/407965f4-206f-457e-9a8b-90948a537d06-os-release\") pod \"multus-additional-cni-plugins-flbc2\" (UID: \"407965f4-206f-457e-9a8b-90948a537d06\") " pod="openshift-multus/multus-additional-cni-plugins-flbc2" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.350305 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/407965f4-206f-457e-9a8b-90948a537d06-os-release\") pod \"multus-additional-cni-plugins-flbc2\" (UID: \"407965f4-206f-457e-9a8b-90948a537d06\") " pod="openshift-multus/multus-additional-cni-plugins-flbc2" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.350363 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4e1957a0-ea7d-4831-ae8f-630a9529ece1-cni-binary-copy\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.350553 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/407965f4-206f-457e-9a8b-90948a537d06-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-flbc2\" (UID: \"407965f4-206f-457e-9a8b-90948a537d06\") " pod="openshift-multus/multus-additional-cni-plugins-flbc2" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.350634 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-hostroot\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.350677 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/407965f4-206f-457e-9a8b-90948a537d06-tuning-conf-dir\") pod \"multus-additional-cni-plugins-flbc2\" (UID: \"407965f4-206f-457e-9a8b-90948a537d06\") " pod="openshift-multus/multus-additional-cni-plugins-flbc2" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.350742 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-hostroot\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.350778 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-cnibin\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.350897 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-cnibin\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.350940 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-host-run-multus-certs\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.350970 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-host-run-multus-certs\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.350995 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-etc-kubernetes\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 
19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.351022 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-etc-kubernetes\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.351050 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a396d626-cea2-42cf-84c5-943b0b85a92b-mcd-auth-proxy-config\") pod \"machine-config-daemon-c2lxp\" (UID: \"a396d626-cea2-42cf-84c5-943b0b85a92b\") " pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.351205 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.351271 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/407965f4-206f-457e-9a8b-90948a537d06-tuning-conf-dir\") pod \"multus-additional-cni-plugins-flbc2\" (UID: \"407965f4-206f-457e-9a8b-90948a537d06\") " pod="openshift-multus/multus-additional-cni-plugins-flbc2" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.350769 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4e1957a0-ea7d-4831-ae8f-630a9529ece1-multus-daemon-config\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.351335 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-host-run-k8s-cni-cncf-io\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.351372 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-host-run-k8s-cni-cncf-io\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.351384 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btskh\" (UniqueName: \"kubernetes.io/projected/4e1957a0-ea7d-4831-ae8f-630a9529ece1-kube-api-access-btskh\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.351405 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4e1957a0-ea7d-4831-ae8f-630a9529ece1-cni-binary-copy\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.351415 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-multus-cni-dir\") 
pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.351471 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-multus-cni-dir\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.351485 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-os-release\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.351533 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/407965f4-206f-457e-9a8b-90948a537d06-cni-binary-copy\") pod \"multus-additional-cni-plugins-flbc2\" (UID: \"407965f4-206f-457e-9a8b-90948a537d06\") " pod="openshift-multus/multus-additional-cni-plugins-flbc2" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.351559 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a396d626-cea2-42cf-84c5-943b0b85a92b-proxy-tls\") pod \"machine-config-daemon-c2lxp\" (UID: \"a396d626-cea2-42cf-84c5-943b0b85a92b\") " pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.351588 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-host-var-lib-cni-bin\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.351610 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/407965f4-206f-457e-9a8b-90948a537d06-cnibin\") pod \"multus-additional-cni-plugins-flbc2\" (UID: \"407965f4-206f-457e-9a8b-90948a537d06\") " pod="openshift-multus/multus-additional-cni-plugins-flbc2" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.351621 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-os-release\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.351651 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-host-var-lib-cni-bin\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.351768 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/407965f4-206f-457e-9a8b-90948a537d06-system-cni-dir\") pod \"multus-additional-cni-plugins-flbc2\" (UID: \"407965f4-206f-457e-9a8b-90948a537d06\") " pod="openshift-multus/multus-additional-cni-plugins-flbc2" Feb 19 12:46:52 crc 
kubenswrapper[4833]: I0219 12:46:52.351799 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2vxj\" (UniqueName: \"kubernetes.io/projected/a396d626-cea2-42cf-84c5-943b0b85a92b-kube-api-access-m2vxj\") pod \"machine-config-daemon-c2lxp\" (UID: \"a396d626-cea2-42cf-84c5-943b0b85a92b\") " pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.351809 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/407965f4-206f-457e-9a8b-90948a537d06-system-cni-dir\") pod \"multus-additional-cni-plugins-flbc2\" (UID: \"407965f4-206f-457e-9a8b-90948a537d06\") " pod="openshift-multus/multus-additional-cni-plugins-flbc2" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.351837 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-multus-socket-dir-parent\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.351880 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-system-cni-dir\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.351907 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-host-run-netns\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.351964 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-host-run-netns\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.352169 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/407965f4-206f-457e-9a8b-90948a537d06-cnibin\") pod \"multus-additional-cni-plugins-flbc2\" (UID: \"407965f4-206f-457e-9a8b-90948a537d06\") " pod="openshift-multus/multus-additional-cni-plugins-flbc2" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.352283 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-multus-socket-dir-parent\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.352312 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4e1957a0-ea7d-4831-ae8f-630a9529ece1-system-cni-dir\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.352664 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" 
(UniqueName: \"kubernetes.io/configmap/407965f4-206f-457e-9a8b-90948a537d06-cni-binary-copy\") pod \"multus-additional-cni-plugins-flbc2\" (UID: \"407965f4-206f-457e-9a8b-90948a537d06\") " pod="openshift-multus/multus-additional-cni-plugins-flbc2" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.352810 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.353763 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.355003 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.355584 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.356419 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.357486 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a396d626-cea2-42cf-84c5-943b0b85a92b-mcd-auth-proxy-config\") pod \"machine-config-daemon-c2lxp\" (UID: \"a396d626-cea2-42cf-84c5-943b0b85a92b\") " pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.357725 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.358533 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.358958 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a396d626-cea2-42cf-84c5-943b0b85a92b-proxy-tls\") pod \"machine-config-daemon-c2lxp\" (UID: \"a396d626-cea2-42cf-84c5-943b0b85a92b\") " pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.359245 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.359773 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.360339 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.362074 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.363209 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 
12:46:52.363713 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.364234 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.366930 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.368165 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2vxj\" (UniqueName: \"kubernetes.io/projected/a396d626-cea2-42cf-84c5-943b0b85a92b-kube-api-access-m2vxj\") pod \"machine-config-daemon-c2lxp\" (UID: \"a396d626-cea2-42cf-84c5-943b0b85a92b\") " pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.368671 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.368715 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmx9d\" (UniqueName: \"kubernetes.io/projected/407965f4-206f-457e-9a8b-90948a537d06-kube-api-access-gmx9d\") pod \"multus-additional-cni-plugins-flbc2\" (UID: \"407965f4-206f-457e-9a8b-90948a537d06\") " pod="openshift-multus/multus-additional-cni-plugins-flbc2" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.368896 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btskh\" (UniqueName: \"kubernetes.io/projected/4e1957a0-ea7d-4831-ae8f-630a9529ece1-kube-api-access-btskh\") pod \"multus-9p75n\" (UID: \"4e1957a0-ea7d-4831-ae8f-630a9529ece1\") " pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.369841 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.370330 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.373325 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.385732 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.397323 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.417572 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.433215 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-flbc2" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.434753 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 
2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.441677 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.450745 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-9p75n" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.451412 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.465082 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.487205 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.498919 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.531789 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pwqj9"] Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.532657 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-p542x" event={"ID":"a9544854-47a0-4750-b6a0-1f4a2bb1955a","Type":"ContainerStarted","Data":"ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792"} Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.532690 4833 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-p542x" event={"ID":"a9544854-47a0-4750-b6a0-1f4a2bb1955a","Type":"ContainerStarted","Data":"c26121c13eabc2f42e9b222d7fc18d000ae5ca6c5e63ae71933c0d253f9b3961"} Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.533013 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.537732 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.537814 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.537908 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.538106 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.538249 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.540530 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.540799 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.547692 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" event={"ID":"407965f4-206f-457e-9a8b-90948a537d06","Type":"ContainerStarted","Data":"d4dec42f160ccca04ee73cb4e527b8ed92a86b1f04cda0b07c3fa1e56a2838a6"} Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.551929 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9p75n" event={"ID":"4e1957a0-ea7d-4831-ae8f-630a9529ece1","Type":"ContainerStarted","Data":"196a2b40440080534e8911256e5a63da7454a73973b551544cf0ddd5710e0b99"} Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.554053 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" event={"ID":"a396d626-cea2-42cf-84c5-943b0b85a92b","Type":"ContainerStarted","Data":"21880bc2875a9298a9b64e6076dfa7f00a0d0e61fa259ee144d1791b4fb0edc2"} Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.558163 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins 
bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\
\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z"
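
Every "Failed to update status for pod" entry in this stretch bottoms out in the same root cause: the kubelet's status patch is relayed through the mutating admission webhook "pod.network-node-identity.openshift.io" at https://127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-02-19. Go's TLS stack rejects any peer certificate whose validity window does not contain the current time, which surfaces here as "x509: certificate has expired or is not yet valid". A minimal sketch of that validity-window check, using only the Go standard library (the certificate path is a placeholder, not taken from this log):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Placeholder path: in the entries above the webhook mounts its serving
	// certificate under /etc/webhook-cert/, but the file name is not logged.
	pemBytes, err := os.ReadFile("webhook-cert.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now().UTC()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	case now.After(cert.NotAfter):
		// This is the branch the kubelet keeps reporting above.
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}
```

Renewing the webhook's serving certificate (or correcting the node clock) should clear every failure of this shape at once; the entries that follow are the same error repeated against different pods.

Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.582631 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 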
valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.596699 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.613538 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.629292 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.649114 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.656109 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-etc-openvswitch\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.656153 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-node-log\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.656168 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6dafae6a-984e-4e99-90ca-76937bfcc3d6-ovn-node-metrics-cert\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.656189 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-run-openvswitch\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.656290 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-run-systemd\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.656333 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-slash\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.656354 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-cni-bin\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.656695 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-cni-netd\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.656736 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6dafae6a-984e-4e99-90ca-76937bfcc3d6-env-overrides\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.656787 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-run-ovn-kubernetes\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.657146 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-run-ovn\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.657223 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6dafae6a-984e-4e99-90ca-76937bfcc3d6-ovnkube-script-lib\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.657245 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-kubelet\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.657294 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-systemd-units\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.657324 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chksh\" (UniqueName: \"kubernetes.io/projected/6dafae6a-984e-4e99-90ca-76937bfcc3d6-kube-api-access-chksh\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.657572 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-run-netns\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.657631 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-var-lib-openvswitch\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.657662 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.657719 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-log-socket\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.657744 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6dafae6a-984e-4e99-90ca-76937bfcc3d6-ovnkube-config\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.667126 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.679847 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.695339 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.708742 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.719661 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.732078 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.744284 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.756975 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.758347 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-log-socket\") pod \"ovnkube-node-pwqj9\" (UID: 
\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.758408 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6dafae6a-984e-4e99-90ca-76937bfcc3d6-ovnkube-config\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.758430 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-etc-openvswitch\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.758456 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-node-log\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.758483 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6dafae6a-984e-4e99-90ca-76937bfcc3d6-ovn-node-metrics-cert\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.758725 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-run-openvswitch\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.759261 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-run-systemd\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.759293 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-slash\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.759300 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-run-systemd\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.759316 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-cni-bin\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc 
kubenswrapper[4833]: I0219 12:46:52.758576 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-etc-openvswitch\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.758534 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-log-socket\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.759379 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-cni-netd\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.759384 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-slash\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.759400 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-cni-bin\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.759406 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6dafae6a-984e-4e99-90ca-76937bfcc3d6-env-overrides\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.758770 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-run-openvswitch\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.758576 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-node-log\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.759433 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-cni-netd\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.759439 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-run-ovn-kubernetes\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.759386 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6dafae6a-984e-4e99-90ca-76937bfcc3d6-ovnkube-config\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.759464 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-run-ovn\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.759518 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6dafae6a-984e-4e99-90ca-76937bfcc3d6-ovnkube-script-lib\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.759528 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-run-ovn-kubernetes\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.759536 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-systemd-units\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.759554 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chksh\" (UniqueName: \"kubernetes.io/projected/6dafae6a-984e-4e99-90ca-76937bfcc3d6-kube-api-access-chksh\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.759577 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-systemd-units\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.759582 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-kubelet\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.759608 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-kubelet\") pod \"ovnkube-node-pwqj9\" 
(UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.759624 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-run-netns\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.759655 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.759692 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-var-lib-openvswitch\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.759739 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-var-lib-openvswitch\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.759765 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-run-netns\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.759790 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.759799 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6dafae6a-984e-4e99-90ca-76937bfcc3d6-env-overrides\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.759822 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-run-ovn\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.760101 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6dafae6a-984e-4e99-90ca-76937bfcc3d6-ovnkube-script-lib\") pod \"ovnkube-node-pwqj9\" (UID: 
\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.762706 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6dafae6a-984e-4e99-90ca-76937bfcc3d6-ovn-node-metrics-cert\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.772340 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.776192 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chksh\" (UniqueName: \"kubernetes.io/projected/6dafae6a-984e-4e99-90ca-76937bfcc3d6-kube-api-access-chksh\") pod \"ovnkube-node-pwqj9\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.786742 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.802473 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.823284 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.837392 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.848363 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.851034 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.862471 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.880415 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:52 crc kubenswrapper[4833]: I0219 12:46:52.894743 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:52Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:53 crc kubenswrapper[4833]: I0219 12:46:53.256505 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 17:42:18.943534368 +0000 UTC Feb 19 12:46:53 crc kubenswrapper[4833]: I0219 12:46:53.561282 4833 generic.go:334] "Generic (PLEG): container finished" podID="407965f4-206f-457e-9a8b-90948a537d06" containerID="ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e" exitCode=0 Feb 19 12:46:53 crc kubenswrapper[4833]: I0219 12:46:53.561365 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" event={"ID":"407965f4-206f-457e-9a8b-90948a537d06","Type":"ContainerDied","Data":"ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e"} Feb 19 12:46:53 crc 
kubenswrapper[4833]: I0219 12:46:53.564055 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9p75n" event={"ID":"4e1957a0-ea7d-4831-ae8f-630a9529ece1","Type":"ContainerStarted","Data":"55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e"} Feb 19 12:46:53 crc kubenswrapper[4833]: I0219 12:46:53.566480 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" event={"ID":"a396d626-cea2-42cf-84c5-943b0b85a92b","Type":"ContainerStarted","Data":"974e8adb7f634f05d280c54259846c41c2b25188cec050cad707c2cbff3cb79e"} Feb 19 12:46:53 crc kubenswrapper[4833]: I0219 12:46:53.566559 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" event={"ID":"a396d626-cea2-42cf-84c5-943b0b85a92b","Type":"ContainerStarted","Data":"26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16"} Feb 19 12:46:53 crc kubenswrapper[4833]: I0219 12:46:53.568857 4833 generic.go:334] "Generic (PLEG): container finished" podID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerID="6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e" exitCode=0 Feb 19 12:46:53 crc kubenswrapper[4833]: I0219 12:46:53.568916 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" event={"ID":"6dafae6a-984e-4e99-90ca-76937bfcc3d6","Type":"ContainerDied","Data":"6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e"} Feb 19 12:46:53 crc kubenswrapper[4833]: I0219 12:46:53.568940 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" event={"ID":"6dafae6a-984e-4e99-90ca-76937bfcc3d6","Type":"ContainerStarted","Data":"6882d8b77b81dd3c529d2b3a949ec30af347fedcba9e479f8e718c01ba182186"} Feb 19 12:46:53 crc kubenswrapper[4833]: I0219 12:46:53.576232 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ad1b032478ce1fd684af32ce6495ec3256986d402cc0694ef33a31b359412196"} Feb 19 12:46:53 crc kubenswrapper[4833]: I0219 12:46:53.577682 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:53Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:53 crc kubenswrapper[4833]: I0219 12:46:53.595580 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:53Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:53 crc kubenswrapper[4833]: I0219 12:46:53.608097 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:53Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:53 crc kubenswrapper[4833]: I0219 12:46:53.620083 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:53Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:53 crc kubenswrapper[4833]: I0219 12:46:53.638901 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:53Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:53 crc kubenswrapper[4833]: I0219 12:46:53.652276 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:53Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:53 crc kubenswrapper[4833]: I0219 12:46:53.662925 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:53Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:53 crc kubenswrapper[4833]: I0219 12:46:53.672951 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:53Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:53 crc kubenswrapper[4833]: I0219 12:46:53.689729 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:53Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:53 crc kubenswrapper[4833]: I0219 12:46:53.703007 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:53Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:53 crc kubenswrapper[4833]: I0219 12:46:53.718124 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:53Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:53 crc kubenswrapper[4833]: I0219 12:46:53.729064 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:53Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:53 crc kubenswrapper[4833]: I0219 12:46:53.740981 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\
\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:53Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:53 crc kubenswrapper[4833]: I0219 12:46:53.750762 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e8adb7f634f05d280c54259846c41c2b25188cec050cad707c2cbff3cb79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:53Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:53 crc kubenswrapper[4833]: I0219 12:46:53.763932 4833 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:53Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:53 crc kubenswrapper[4833]: I0219 12:46:53.773256 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:53Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:53 crc kubenswrapper[4833]: I0219 12:46:53.784242 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:53Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:53 crc kubenswrapper[4833]: I0219 12:46:53.795637 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:53Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:53 crc kubenswrapper[4833]: I0219 12:46:53.809154 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:53Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:53 crc kubenswrapper[4833]: I0219 12:46:53.820907 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1b032478ce1fd684af32ce6495ec3256986d402cc0694ef33a31b359412196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:53Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:53 crc kubenswrapper[4833]: I0219 12:46:53.831790 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:53Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:53 crc kubenswrapper[4833]: I0219 12:46:53.848816 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:53Z 
is after 2025-08-24T17:21:41Z" Feb 19 12:46:53 crc kubenswrapper[4833]: I0219 12:46:53.864402 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:53Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:53 crc kubenswrapper[4833]: I0219 12:46:53.880368 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:53Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.071655 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.071842 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:46:54 crc kubenswrapper[4833]: E0219 12:46:54.071899 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:46:58.071865497 +0000 UTC m=+28.467384305 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.071974 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.072040 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:46:54 crc kubenswrapper[4833]: E0219 12:46:54.072080 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.072101 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:46:54 crc kubenswrapper[4833]: E0219 12:46:54.072110 4833 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 12:46:54 crc kubenswrapper[4833]: E0219 12:46:54.072112 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 12:46:54 crc kubenswrapper[4833]: E0219 12:46:54.072168 4833 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 12:46:54 crc kubenswrapper[4833]: E0219 12:46:54.072204 4833 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 12:46:54 crc kubenswrapper[4833]: E0219 12:46:54.072297 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 12:46:54 crc kubenswrapper[4833]: E0219 12:46:54.072333 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 12:46:54 crc kubenswrapper[4833]: E0219 12:46:54.072349 4833 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 12:46:54 crc kubenswrapper[4833]: E0219 12:46:54.072172 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 12:46:58.072159795 +0000 UTC m=+28.467678573 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 12:46:54 crc kubenswrapper[4833]: E0219 12:46:54.072408 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 12:46:58.07238976 +0000 UTC m=+28.467908568 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 12:46:54 crc kubenswrapper[4833]: E0219 12:46:54.072432 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 12:46:58.072420301 +0000 UTC m=+28.467939109 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 12:46:54 crc kubenswrapper[4833]: E0219 12:46:54.072719 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 12:46:58.072621576 +0000 UTC m=+28.468140354 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.257339 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 09:31:18.578558875 +0000 UTC Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.313931 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:46:54 crc kubenswrapper[4833]: E0219 12:46:54.314097 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.313946 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:46:54 crc kubenswrapper[4833]: E0219 12:46:54.314238 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.314358 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:46:54 crc kubenswrapper[4833]: E0219 12:46:54.314598 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
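
The "network is not ready" errors above come from the container runtime, not the kubelet itself: the runtime reports NetworkReady=false until a CNI configuration shows up in its conf dir, and the kubelet skips syncing pods that need pod networking until it does. A sketch of that check (the directory is the one named in the log; the glob patterns are an assumption matching common CNI config loaders, not CRI-O's exact code):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Report NetworkReady the way a runtime would: ready only once at least
// one CNI config file exists in the configured directory.
func main() {
	dir := "/etc/kubernetes/cni/net.d" // from the log entry above
	var found []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, _ := filepath.Glob(filepath.Join(dir, pat))
		found = append(found, matches...)
	}
	if len(found) == 0 {
		fmt.Fprintf(os.Stderr, "NetworkReady=false: no CNI configuration file in %s\n", dir)
		os.Exit(1)
	}
	fmt.Println("CNI configs present:", found)
}
```
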
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.584990 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" event={"ID":"6dafae6a-984e-4e99-90ca-76937bfcc3d6","Type":"ContainerStarted","Data":"00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f"} Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.585348 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" event={"ID":"6dafae6a-984e-4e99-90ca-76937bfcc3d6","Type":"ContainerStarted","Data":"327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8"} Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.585363 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" event={"ID":"6dafae6a-984e-4e99-90ca-76937bfcc3d6","Type":"ContainerStarted","Data":"fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947"} Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.585374 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" event={"ID":"6dafae6a-984e-4e99-90ca-76937bfcc3d6","Type":"ContainerStarted","Data":"308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b"} Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.585385 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" event={"ID":"6dafae6a-984e-4e99-90ca-76937bfcc3d6","Type":"ContainerStarted","Data":"446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d"} Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.585395 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" event={"ID":"6dafae6a-984e-4e99-90ca-76937bfcc3d6","Type":"ContainerStarted","Data":"56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918"} Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.588090 4833 generic.go:334] "Generic (PLEG): container finished" podID="407965f4-206f-457e-9a8b-90948a537d06" containerID="1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072" exitCode=0 Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.588136 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" event={"ID":"407965f4-206f-457e-9a8b-90948a537d06","Type":"ContainerDied","Data":"1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072"} Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.605273 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:54Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.620036 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:54Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.635547 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:54Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.653609 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:54Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.669563 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:54Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.684078 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:54Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.695451 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1b032478ce1fd684af32ce6495ec3256986d402cc0694ef33a31b359412196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:54Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.710350 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:54Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.727796 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:54Z 
is after 2025-08-24T17:21:41Z" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.743935 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:54Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.757145 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:54Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.766900 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-qhkjl"] Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.767273 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-qhkjl" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.770038 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.770027 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e8adb7f634f05d280c54259846c41c2b25188cec050cad707c2cbff3cb79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2
vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:54Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.770106 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.770191 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.770520 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.779001 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/cbfd8074-c921-4100-a633-232f33b775b3-serviceca\") pod \"node-ca-qhkjl\" (UID: \"cbfd8074-c921-4100-a633-232f33b775b3\") " pod="openshift-image-registry/node-ca-qhkjl" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.779033 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7hlg\" (UniqueName: \"kubernetes.io/projected/cbfd8074-c921-4100-a633-232f33b775b3-kube-api-access-n7hlg\") pod \"node-ca-qhkjl\" (UID: \"cbfd8074-c921-4100-a633-232f33b775b3\") " pod="openshift-image-registry/node-ca-qhkjl" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.779064 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cbfd8074-c921-4100-a633-232f33b775b3-host\") pod \"node-ca-qhkjl\" (UID: \"cbfd8074-c921-4100-a633-232f33b775b3\") " pod="openshift-image-registry/node-ca-qhkjl" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.782319 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:54Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.796567 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/r
un/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:54Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.810608 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e8adb7f634f05d280c54259846c41c2b25188cec050cad707c2cbff3cb79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:54Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.826848 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:54Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.839761 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:54Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.853659 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:54Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.864338 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfd8074-c921-4100-a633-232f33b775b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:54Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.879024 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:54Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.879827 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/cbfd8074-c921-4100-a633-232f33b775b3-serviceca\") pod \"node-ca-qhkjl\" (UID: \"cbfd8074-c921-4100-a633-232f33b775b3\") " pod="openshift-image-registry/node-ca-qhkjl" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.879919 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7hlg\" (UniqueName: \"kubernetes.io/projected/cbfd8074-c921-4100-a633-232f33b775b3-kube-api-access-n7hlg\") pod \"node-ca-qhkjl\" (UID: \"cbfd8074-c921-4100-a633-232f33b775b3\") " pod="openshift-image-registry/node-ca-qhkjl" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.880027 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cbfd8074-c921-4100-a633-232f33b775b3-host\") pod \"node-ca-qhkjl\" (UID: \"cbfd8074-c921-4100-a633-232f33b775b3\") " pod="openshift-image-registry/node-ca-qhkjl" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.880148 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cbfd8074-c921-4100-a633-232f33b775b3-host\") pod \"node-ca-qhkjl\" (UID: \"cbfd8074-c921-4100-a633-232f33b775b3\") " pod="openshift-image-registry/node-ca-qhkjl" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.881032 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/cbfd8074-c921-4100-a633-232f33b775b3-serviceca\") pod \"node-ca-qhkjl\" (UID: \"cbfd8074-c921-4100-a633-232f33b775b3\") " pod="openshift-image-registry/node-ca-qhkjl" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.903271 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7hlg\" (UniqueName: \"kubernetes.io/projected/cbfd8074-c921-4100-a633-232f33b775b3-kube-api-access-n7hlg\") pod \"node-ca-qhkjl\" (UID: \"cbfd8074-c921-4100-a633-232f33b775b3\") " pod="openshift-image-registry/node-ca-qhkjl" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.904740 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:54Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.917833 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:54Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.930147 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1b032478ce1fd684af32ce6495ec3256986d402cc0694ef33a31b359412196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:54Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.943418 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:54Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:54 crc kubenswrapper[4833]: I0219 12:46:54.967025 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:54Z 
is after 2025-08-24T17:21:41Z" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.078979 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-qhkjl" Feb 19 12:46:55 crc kubenswrapper[4833]: W0219 12:46:55.100410 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbfd8074_c921_4100_a633_232f33b775b3.slice/crio-0ad7d8a98e308b30c7632fa3418fac7a69db0ed32c97659ef289adc095ef700d WatchSource:0}: Error finding container 0ad7d8a98e308b30c7632fa3418fac7a69db0ed32c97659ef289adc095ef700d: Status 404 returned error can't find the container with id 0ad7d8a98e308b30c7632fa3418fac7a69db0ed32c97659ef289adc095ef700d Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.259546 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 18:58:35.477026451 +0000 UTC Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.541327 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.551017 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.557648 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.564733 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:55Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.586662 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:55Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.596137 4833 generic.go:334] "Generic (PLEG): container finished" podID="407965f4-206f-457e-9a8b-90948a537d06" containerID="1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf" exitCode=0 Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.596243 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" event={"ID":"407965f4-206f-457e-9a8b-90948a537d06","Type":"ContainerDied","Data":"1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf"} Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.598846 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qhkjl" event={"ID":"cbfd8074-c921-4100-a633-232f33b775b3","Type":"ContainerStarted","Data":"e82b73c77669166b2181b50da7d3d49dcd5463ce3338237b2bc03f0a207a88f2"} Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.598900 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-qhkjl" event={"ID":"cbfd8074-c921-4100-a633-232f33b775b3","Type":"ContainerStarted","Data":"0ad7d8a98e308b30c7632fa3418fac7a69db0ed32c97659ef289adc095ef700d"} Feb 19 12:46:55 crc kubenswrapper[4833]: E0219 12:46:55.606193 4833 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.610376 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1b032478ce1fd684af32ce6495ec3256986d402cc0694ef33a31b359412196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:55Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.630034 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:55Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.666572 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:55Z 
is after 2025-08-24T17:21:41Z" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.689264 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:55Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.717804 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:55Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.738709 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e8adb7f634f05d280c54259846c41c2b25188cec050cad707c2cbff3cb79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:55Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.760573 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:55Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.780143 4833 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:55Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.794368 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:55Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.806650 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfd8074-c921-4100-a633-232f33b775b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:55Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.809920 4833 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.813629 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.813703 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.813727 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.813906 4833 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.822023 4833 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.822572 4833 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.823876 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.823913 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.823924 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.823941 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.823953 
4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:55Z","lastTransitionTime":"2026-02-19T12:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.828714 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:55Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.847816 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1b032478ce1fd684af32ce6495ec3256986d402cc0694ef33a31b359412196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:55Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:55 crc kubenswrapper[4833]: E0219 12:46:55.849094 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9bc9539f-520c-4440-a07d-375a239a8e0f\\\",\\\"systemUUID\\\":\\\"dc14cf1a-5576-4d69-98fb-0c44d3f24b1f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:55Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.855255 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.855558 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.855660 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.855745 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.855819 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:55Z","lastTransitionTime":"2026-02-19T12:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.863222 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:55Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:55 crc kubenswrapper[4833]: E0219 12:46:55.878448 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9bc9539f-520c-4440-a07d-375a239a8e0f\\\",\\\"systemUUID\\\":\\\"dc14cf1a-5576-4d69-98fb-0c44d3f24b1f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:55Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.882820 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.882860 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.882873 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.882890 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.882901 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:55Z","lastTransitionTime":"2026-02-19T12:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.889631 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f
5a420303d89b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:55Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:55 crc kubenswrapper[4833]: E0219 12:46:55.896752 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9bc9539f-520c-4440-a07d-375a239a8e0f\\\",\\\"systemUUID\\\":\\\"dc14cf1a-5576-4d69-98fb-0c44d3f24b1f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:55Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.900755 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.900805 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.900817 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.900834 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.900847 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:55Z","lastTransitionTime":"2026-02-19T12:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.905072 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:55Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:55 crc kubenswrapper[4833]: E0219 12:46:55.914251 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9bc9539f-520c-4440-a07d-375a239a8e0f\\\",\\\"systemUUID\\\":\\\"dc14cf1a-5576-4d69-98fb-0c44d3f24b1f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:55Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.918437 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.918470 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.918481 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.918541 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.918557 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:55Z","lastTransitionTime":"2026-02-19T12:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.920010 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:55Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:55 crc kubenswrapper[4833]: E0219 12:46:55.932726 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9bc9539f-520c-4440-a07d-375a239a8e0f\\\",\\\"systemUUID\\\":\\\"dc14cf1a-5576-4d69-98fb-0c44d3f24b1f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:55Z is after 2025-08-24T17:21:41Z"
Feb 19 12:46:55 crc kubenswrapper[4833]: E0219 12:46:55.933112 4833 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
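[The two records above show every node-status patch being rejected by the node.network-node-identity.openshift.io webhook because its serving certificate expired on 2025-08-24, months before the node clock's 2026-02-19. The x509 failure is a pure time-window check against the certificate's NotBefore/NotAfter fields. A minimal Go sketch of that check; the tls.crt filename under the /etc/webhook-cert/ mount seen later in this log is an assumption, not taken from the kubelet source:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Path is hypothetical; any PEM-encoded certificate works here.
	data, err := os.ReadFile("/etc/webhook-cert/tls.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now()
	// Mirrors the "certificate has expired or is not yet valid" condition in the log.
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid: current time %s is before %s\n",
			now.UTC().Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	case now.After(cert.NotAfter):
		fmt.Printf("expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}

Until that certificate is renewed, every status patch below fails the same way, which is why the same "is after 2025-08-24T17:21:41Z" tail repeats on each record.]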
event="NodeHasSufficientMemory" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.934998 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.935010 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.935029 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.935041 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:55Z","lastTransitionTime":"2026-02-19T12:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.938818 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.938818 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:55Z is after 2025-08-24T17:21:41Z"
Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.950366 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e8adb7f634f05d280c54259846c41c2b25188cec050cad707c2cbff3cb79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:55Z is after 2025-08-24T17:21:41Z"
Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.965457 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:55Z is after 2025-08-24T17:21:41Z"
Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.976415 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:55Z is after 2025-08-24T17:21:41Z"
Feb 19 12:46:55 crc kubenswrapper[4833]: I0219 12:46:55.987653 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfd8074-c921-4100-a633-232f33b775b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e82b73c77669166b2181b50da7d3d49dcd5463ce3338237b2bc03f0a207a88f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:55Z is after 2025-08-24T17:21:41Z"
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:55Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.013658 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:56Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.026786 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:56Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.036956 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.036981 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.036989 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.037008 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.037017 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:56Z","lastTransitionTime":"2026-02-19T12:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.039817 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91b543fb-1f33-4706-9ce7-dff08bf7b82f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08afefdd71557d6c17668ed12d83aa416dcb83414ff4c8d741df835d2cdfdf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4ef1418f9464e95e739e3a543b14668c04159065dea6093d086d75a32d919e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.039817 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91b543fb-1f33-4706-9ce7-dff08bf7b82f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08afefdd71557d6c17668ed12d83aa416dcb83414ff4c8d741df835d2cdfdf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4ef1418f9464e95e739e3a543b14668c04159065dea6093d086d75a32d919e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3caf63dc183f47734bc7ada20dc729d98465449779a981f21080c0f23ef7e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://990bc2af578ada7fc2630b0d8ff77ab12bc15e0883ae62d8f64598944f6255f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:56Z is after 2025-08-24T17:21:41Z"
Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.139767 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.139799 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.139808 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.139822 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.139831 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:56Z","lastTransitionTime":"2026-02-19T12:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.242891 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.242952 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.242974 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.243003 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.243027 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:56Z","lastTransitionTime":"2026-02-19T12:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.260561 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 03:06:21.387984924 +0000 UTC
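[The certificate_manager record above reports a rotation deadline (2026-01-06) already in the past relative to the node clock, so the kubelet should be attempting to rotate its serving certificate now. Client-go style certificate managers place that deadline at a randomized fraction of the certificate's lifetime so a fleet does not renew all at once; a rough sketch of the idea, with the 70-90% jitter window assumed for illustration rather than quoted from the library:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a point at a random fraction (here 70-90%)
// of the certificate's validity period. The exact jitter window is an
// assumption; see client-go's certificate manager for the real rule.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	fraction := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(float64(lifetime) * fraction))
}

func main() {
	// notBefore is hypothetical; notAfter matches the expiration in the log.
	notBefore := time.Date(2025, time.November, 26, 5, 53, 3, 0, time.UTC)
	notAfter := time.Date(2026, time.February, 24, 5, 53, 3, 0, time.UTC)
	deadline := rotationDeadline(notBefore, notAfter)
	fmt.Println("rotation deadline:", deadline)
	if time.Now().After(deadline) {
		fmt.Println("rotation is overdue; the kubelet would try to renew now")
	}
}

Note the rotation itself cannot clear the webhook failures above: those come from the network-node-identity webhook's own serving certificate, not from the kubelet-serving certificate being rotated here.]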
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.315334 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:46:56 crc kubenswrapper[4833]: E0219 12:46:56.315583 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.346428 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.346486 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.346534 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.346558 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.346576 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:56Z","lastTransitionTime":"2026-02-19T12:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.449417 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.449486 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.449539 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.449574 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.449592 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:56Z","lastTransitionTime":"2026-02-19T12:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.552583 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.552665 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.552688 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.552718 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.552741 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:56Z","lastTransitionTime":"2026-02-19T12:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.607118 4833 generic.go:334] "Generic (PLEG): container finished" podID="407965f4-206f-457e-9a8b-90948a537d06" containerID="54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15" exitCode=0 Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.607213 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" event={"ID":"407965f4-206f-457e-9a8b-90948a537d06","Type":"ContainerDied","Data":"54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15"} Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.630406 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.630406 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:56Z is after 2025-08-24T17:21:41Z"
Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.653306 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:56Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.655701 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.655761 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.655786 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.655828 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.655853 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:56Z","lastTransitionTime":"2026-02-19T12:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.672016 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e8adb7f634f05d280c54259846c41c2b25188cec050cad707c2cbff3cb79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:56Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.690974 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:56Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.717142 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:56Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.731558 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:56Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.743174 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfd8074-c921-4100-a633-232f33b775b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e82b73c77669166b2181b50da7d3d49dcd5463ce3338237b2bc03f0a207a88f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:56Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.759061 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.759117 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.759581 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.759652 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.759687 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:56Z","lastTransitionTime":"2026-02-19T12:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.760024 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91b543fb-1f33-4706-9ce7-dff08bf7b82f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08afefdd71557d6c17668ed12d83aa416dcb83414ff4c8d741df835d2cdfdf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4ef1418f9464e95e739e3a543b14668c04159065dea6093d086d75a32d919e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3caf63dc183f47734bc7ada20dc729d98465449779a981f21080c0f23ef7e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://990bc2af578ada7fc2630b0d8ff77ab12bc15e0883ae62d8f64598944f6255f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:56Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.774354 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:56Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.782073 4833 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.787643 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\"
:\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:56Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.800209 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:56Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.813652 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1b032478ce1fd684af32ce6495ec3256986d402cc0694ef33a31b359412196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:56Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.828931 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:56Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.857351 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:56Z 
is after 2025-08-24T17:21:41Z" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.863320 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.863356 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.863369 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.863386 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.863398 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:56Z","lastTransitionTime":"2026-02-19T12:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.908730 4833 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.965273 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.965318 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.965333 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.965352 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:56 crc kubenswrapper[4833]: I0219 12:46:56.965365 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:56Z","lastTransitionTime":"2026-02-19T12:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.068334 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.068376 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.068386 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.068404 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.068417 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:57Z","lastTransitionTime":"2026-02-19T12:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.170942 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.171025 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.171045 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.171070 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.171085 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:57Z","lastTransitionTime":"2026-02-19T12:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.261478 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 11:33:03.166056208 +0000 UTC Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.273732 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.273778 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.273790 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.273812 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.273825 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:57Z","lastTransitionTime":"2026-02-19T12:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.377546 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.377606 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.377623 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.377646 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.377663 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:57Z","lastTransitionTime":"2026-02-19T12:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.480631 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.480692 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.480709 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.480736 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.480754 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:57Z","lastTransitionTime":"2026-02-19T12:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.583670 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.583721 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.583737 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.583760 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.583777 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:57Z","lastTransitionTime":"2026-02-19T12:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.619612 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" event={"ID":"6dafae6a-984e-4e99-90ca-76937bfcc3d6","Type":"ContainerStarted","Data":"e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3"} Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.624001 4833 generic.go:334] "Generic (PLEG): container finished" podID="407965f4-206f-457e-9a8b-90948a537d06" containerID="d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b" exitCode=0 Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.624048 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" event={"ID":"407965f4-206f-457e-9a8b-90948a537d06","Type":"ContainerDied","Data":"d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b"} Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.665695 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:57Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.686974 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:57Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.687806 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.687871 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.687895 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.687926 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.687950 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:57Z","lastTransitionTime":"2026-02-19T12:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.707403 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1b032478ce1fd684af32ce6495ec3256986d402cc0694ef33a31b359412196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:57Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.729676 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:57Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.758976 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:57Z 
is after 2025-08-24T17:21:41Z" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.774585 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:57Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.793357 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.793481 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.793548 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.793573 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.793626 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:57Z","lastTransitionTime":"2026-02-19T12:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.796854 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02
-19T12:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:57Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.814198 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e8adb7f634f05d280c54259846c41c2b25188cec050cad707c2cbff3cb79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:57Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.828326 4833 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:57Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.849223 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:57Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.867160 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:57Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.880741 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfd8074-c921-4100-a633-232f33b775b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e82b73c77669166b2181b50da7d3d49dcd5463ce3338237b2bc03f0a207a88f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:57Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.896364 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.896415 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.896432 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.896456 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.896470 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:57Z","lastTransitionTime":"2026-02-19T12:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.899303 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91b543fb-1f33-4706-9ce7-dff08bf7b82f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08afefdd71557d6c17668ed12d83aa416dcb83414ff4c8d741df835d2cdfdf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4ef1418f9464e95e739e3a543b14668c04159065dea6093d086d75a32d919e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3caf63dc183f47734bc7ada20dc729d98465449779a981f21080c0f23ef7e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://990bc2af578ada7fc2630b0d8ff77ab12bc15e0883ae62d8f64598944f6255f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:57Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.920156 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:57Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.999399 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.999440 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.999452 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.999471 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:57 crc kubenswrapper[4833]: I0219 12:46:57.999483 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:57Z","lastTransitionTime":"2026-02-19T12:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.102218 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.102266 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.102277 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.102295 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.102309 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:58Z","lastTransitionTime":"2026-02-19T12:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.119891 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.120082 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:46:58 crc kubenswrapper[4833]: E0219 12:46:58.120123 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:47:06.120094951 +0000 UTC m=+36.515613749 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.120168 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.120220 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:46:58 crc kubenswrapper[4833]: E0219 12:46:58.120263 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.120270 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:46:58 crc kubenswrapper[4833]: E0219 12:46:58.120288 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 12:46:58 crc kubenswrapper[4833]: E0219 
12:46:58.120307 4833 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 12:46:58 crc kubenswrapper[4833]: E0219 12:46:58.120364 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 12:47:06.120346577 +0000 UTC m=+36.515865375 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 12:46:58 crc kubenswrapper[4833]: E0219 12:46:58.120376 4833 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 12:46:58 crc kubenswrapper[4833]: E0219 12:46:58.120429 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 12:47:06.120414659 +0000 UTC m=+36.515933457 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 12:46:58 crc kubenswrapper[4833]: E0219 12:46:58.120404 4833 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 12:46:58 crc kubenswrapper[4833]: E0219 12:46:58.120461 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 12:46:58 crc kubenswrapper[4833]: E0219 12:46:58.120529 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 12:46:58 crc kubenswrapper[4833]: E0219 12:46:58.120550 4833 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 12:46:58 crc kubenswrapper[4833]: E0219 12:46:58.120553 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 12:47:06.120532502 +0000 UTC m=+36.516051280 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 12:46:58 crc kubenswrapper[4833]: E0219 12:46:58.120709 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 12:47:06.120636084 +0000 UTC m=+36.516154872 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.204862 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.204928 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.204945 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.204970 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.204986 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:58Z","lastTransitionTime":"2026-02-19T12:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.262590 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 23:29:55.185025561 +0000 UTC Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.308026 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.308092 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.308110 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.308140 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.308158 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:58Z","lastTransitionTime":"2026-02-19T12:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.314695 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.314781 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.314723 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:46:58 crc kubenswrapper[4833]: E0219 12:46:58.314959 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:46:58 crc kubenswrapper[4833]: E0219 12:46:58.315334 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:46:58 crc kubenswrapper[4833]: E0219 12:46:58.315173 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.411233 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.411301 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.411325 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.411355 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.411378 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:58Z","lastTransitionTime":"2026-02-19T12:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.515085 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.515159 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.515176 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.515200 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.515217 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:58Z","lastTransitionTime":"2026-02-19T12:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.618621 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.618664 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.618681 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.618702 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.618717 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:58Z","lastTransitionTime":"2026-02-19T12:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.632771 4833 generic.go:334] "Generic (PLEG): container finished" podID="407965f4-206f-457e-9a8b-90948a537d06" containerID="4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def" exitCode=0 Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.632836 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" event={"ID":"407965f4-206f-457e-9a8b-90948a537d06","Type":"ContainerDied","Data":"4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def"} Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.654234 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:58Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.675002 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:58Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.693580 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e8adb7f634f05d280c54259846c41c2b25188cec050cad707c2cbff3cb79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:58Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.708960 4833 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:58Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.720982 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.721014 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.721024 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.721038 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.721049 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:58Z","lastTransitionTime":"2026-02-19T12:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.723258 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:58Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.737866 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:58Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.750432 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfd8074-c921-4100-a633-232f33b775b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e82b73c77669166b2181b50da7d3d49dcd5463ce3338237b2bc03f0a207a88f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:58Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.765608 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91b543fb-1f33-4706-9ce7-dff08bf7b82f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08afefdd71557d6c17668ed12d83aa416dcb83414ff4c8d741df835d2cdfdf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4ef1418f9464e95e739e3a543b14668c04159065dea6093d086d75a32d919e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3caf63dc183f47734bc7ada20dc729d98465449779a981f21080c0f23ef7e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://990bc2af578ada7fc2630b0d8ff77ab12bc15e0883ae62d8f64598944f6255f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:58Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.778883 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:58Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.793954 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:58Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.810319 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:58Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.821454 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1b032478ce1fd684af32ce6495ec3256986d402cc0694ef33a31b359412196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:58Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.822626 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.822653 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.822684 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.822698 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.822706 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:58Z","lastTransitionTime":"2026-02-19T12:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.837942 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:58Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.859517 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:58Z 
is after 2025-08-24T17:21:41Z" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.930401 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.930436 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.930448 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.930464 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:58 crc kubenswrapper[4833]: I0219 12:46:58.930475 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:58Z","lastTransitionTime":"2026-02-19T12:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.032798 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.032828 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.032836 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.032849 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.032858 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:59Z","lastTransitionTime":"2026-02-19T12:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.134902 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.134962 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.134985 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.135012 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.135033 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:59Z","lastTransitionTime":"2026-02-19T12:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.238386 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.238451 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.238469 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.238536 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.238555 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:59Z","lastTransitionTime":"2026-02-19T12:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.262747 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 00:50:24.616204036 +0000 UTC Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.341806 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.341864 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.341883 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.341906 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.341924 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:59Z","lastTransitionTime":"2026-02-19T12:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.445052 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.445092 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.445100 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.445114 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.445126 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:59Z","lastTransitionTime":"2026-02-19T12:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.548667 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.548730 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.548747 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.548773 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.548790 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:59Z","lastTransitionTime":"2026-02-19T12:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.647647 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" event={"ID":"6dafae6a-984e-4e99-90ca-76937bfcc3d6","Type":"ContainerStarted","Data":"09e616ee296f2e8f4d7bb19973078180182ad0a7033a95827b4398d82cb321d0"} Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.647925 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.651559 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.651604 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.651620 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.651642 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.651659 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:59Z","lastTransitionTime":"2026-02-19T12:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.655646 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" event={"ID":"407965f4-206f-457e-9a8b-90948a537d06","Type":"ContainerStarted","Data":"24ae9deccb64ee85ac4adb206a70f8e11a3143dbb1939e7fc6e06f73639de860"} Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.671488 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91b543fb-1f33-4706-9ce7-dff08bf7b82f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08afefdd71557d6c17668ed12d83aa416dcb83414ff4c8d741df835d2cdfdf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4ef1418f9464e95e739e3a543b14668c04159065dea6093d086d75a32d919e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3caf63dc183f47734bc7ada20dc729d98465449779a981f21080c0f23ef7e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://990bc2af578ada7fc2630b0d8ff77ab12bc15e0883ae62d8f64598944f6255f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:59Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.682705 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.694135 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mount
Path\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:59Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.716596 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:59Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.737794 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:59Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.754383 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.754440 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.754458 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.754482 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.754530 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:59Z","lastTransitionTime":"2026-02-19T12:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.759766 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1b032478ce1fd684af32ce6495ec3256986d402cc0694ef33a31b359412196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:59Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.778642 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:59Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.809380 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e616ee296f2e8f4d7bb19973078180182ad0a7
033a95827b4398d82cb321d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:59Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.830453 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:59Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.854857 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:59Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.858409 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.858476 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.858536 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.858570 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.858595 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:59Z","lastTransitionTime":"2026-02-19T12:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.873408 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e8adb7f634f05d280c54259846c41c2b25188cec050cad707c2cbff3cb79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:59Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.895734 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:59Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.917776 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:59Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.934595 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:59Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.949764 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfd8074-c921-4100-a633-232f33b775b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e82b73c77669166b2181b50da7d3d49dcd5463ce3338237b2bc03f0a207a88f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:59Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.961022 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.961133 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.961152 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.961213 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.961231 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:46:59Z","lastTransitionTime":"2026-02-19T12:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.966454 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:59Z is after 2025-08-24T17:21:41Z" Feb 19 12:46:59 crc kubenswrapper[4833]: I0219 12:46:59.983569 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfd8074-c921-4100-a633-232f33b775b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e82b73c77669166b2181b50da7d3d49dcd5463ce3338237b2bc03f0a207a88f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:46:59Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.004770 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.026013 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.046381 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.064172 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.064235 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.064252 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.064276 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.064296 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:00Z","lastTransitionTime":"2026-02-19T12:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.067997 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91b543fb-1f33-4706-9ce7-dff08bf7b82f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08afefdd71557d6c17668ed12d83aa416dcb83414ff4c8d741df835d2cdfdf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4ef1418f9464e95e739e3a543b14668c04159065dea6093d086d75a32d919e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3caf63dc183f47734bc7ada20dc729d98465449779a981f21080c0f23ef7e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://990bc2af578ada7fc2630b0d8ff77ab12bc15e0883ae62d8f64598944f6255f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.087112 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1b032478ce1fd684af32ce6495ec3256986d402cc0694ef33a31b359412196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.106416 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.135222 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e616ee296f2e8f4d7bb19973078180182ad0a7
033a95827b4398d82cb321d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.157058 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.167605 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.167725 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.167742 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.167766 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.167786 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:00Z","lastTransitionTime":"2026-02-19T12:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.180263 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.233357 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ae9deccb64ee85ac4adb206a70f8e11a3143dbb1939e7fc6e06f73639de860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"
system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.260406 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e8adb7f634f05d280c54259846c41c2b25188cec050cad707c2cbff3cb79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootf
s\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.263450 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 09:32:31.84353825 +0000 UTC Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.270110 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.270432 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.270572 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.270710 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.270831 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:00Z","lastTransitionTime":"2026-02-19T12:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.280183 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.314541 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.314611 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.314618 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:47:00 crc kubenswrapper[4833]: E0219 12:47:00.314700 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:47:00 crc kubenswrapper[4833]: E0219 12:47:00.314764 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:47:00 crc kubenswrapper[4833]: E0219 12:47:00.314816 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.335787 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1b032478ce1fd684af32ce6495ec3256986d402cc0694ef33a31b359412196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.354105 4833 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.373864 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.373915 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.373931 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.373951 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.373965 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:00Z","lastTransitionTime":"2026-02-19T12:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.382819 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc3
2fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e616ee296f2e8f4d7bb19973078180182ad0a7033a95827b4398d82cb321d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/
\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.402255 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.429206 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.443915 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ae9deccb64ee85ac4adb206a70f8e11a3143dbb1939e7fc6e06f73639de860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.457153 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e8adb7f634f05d280c54259846c41c2b25188cec050cad707c2cbff3cb79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.471205 4833 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.475529 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.475573 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.475590 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.475613 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.475629 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:00Z","lastTransitionTime":"2026-02-19T12:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.482353 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.497775 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfd8074-c921-4100-a633-232f33b775b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e82b73c77669166b2181b50da7d3d49dcd5463ce3338237b2bc03f0a207a88f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.512996 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.529045 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.542571 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.558060 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91b543fb-1f33-4706-9ce7-dff08bf7b82f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08afefdd71557d6c17668ed12d83aa416dcb83414ff4c8d741df835d2cdfdf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4ef1418f9464e95e739e3a543b14668c04159065dea6093d086d75a32d919e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3caf63dc183f47734bc7ada20dc729d98465449779a981f21080c0f23ef7e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://990bc2af578ada7fc2630b0d8ff77ab12bc15e0883ae62d8f64598944f6255f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.578252 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.578320 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.578338 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.578359 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.578374 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:00Z","lastTransitionTime":"2026-02-19T12:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.659965 4833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.660858 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.681028 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.681070 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.681082 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.681099 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.681112 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:00Z","lastTransitionTime":"2026-02-19T12:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.693378 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.714744 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.733644 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.750133 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.762456 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfd8074-c921-4100-a633-232f33b775b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e82b73c77669166b2181b50da7d3d49dcd5463ce3338237b2bc03f0a207a88f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.781683 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91b543fb-1f33-4706-9ce7-dff08bf7b82f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08afefdd71557d6c17668ed12d83aa416dcb83414ff4c8d741df835d2cdfdf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4ef1418f9464e95e739e3a543b14668c04159065dea6093d086d75a32d919e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3caf63dc183f47734bc7ada20dc729d98465449779a981f21080c0f23ef7e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://990bc2af578ada7fc2630b0d8ff77ab12bc15e0883ae62d8f64598944f6255f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.783705 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.783754 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.783771 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.783793 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.783810 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:00Z","lastTransitionTime":"2026-02-19T12:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.803680 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.832263 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e616ee296f2e8f4d7bb19973078180182ad0a7033a95827b4398d82cb321d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\
\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is 
after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.858818 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.877948 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.886253 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.886329 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.886353 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.886376 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.886393 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:00Z","lastTransitionTime":"2026-02-19T12:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.897560 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1b032478ce1fd684af32ce6495ec3256986d402cc0694ef33a31b359412196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.917771 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.936238 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.958572 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ae9deccb64ee85ac4adb206a70f8e11a3143dbb1939e7fc6e06f73639de860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.975980 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e8adb7f634f05d280c54259846c41c2b25188cec050cad707c2cbff3cb79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/r
ootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:00Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.989437 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.989484 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.989528 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.989554 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:00 crc kubenswrapper[4833]: I0219 12:47:00.989571 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:00Z","lastTransitionTime":"2026-02-19T12:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.092840 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.092902 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.092926 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.092955 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.092976 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:01Z","lastTransitionTime":"2026-02-19T12:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.196147 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.196214 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.196232 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.196255 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.196272 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:01Z","lastTransitionTime":"2026-02-19T12:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.264454 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 08:12:35.312209955 +0000 UTC Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.298819 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.298875 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.298896 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.298923 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.298941 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:01Z","lastTransitionTime":"2026-02-19T12:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.401204 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.401272 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.401292 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.401318 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.401336 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:01Z","lastTransitionTime":"2026-02-19T12:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.504173 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.504223 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.504236 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.504254 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.504267 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:01Z","lastTransitionTime":"2026-02-19T12:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.606940 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.606975 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.606987 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.607003 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.607015 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:01Z","lastTransitionTime":"2026-02-19T12:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.662671 4833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.709792 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.709852 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.709872 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.709895 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.709912 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:01Z","lastTransitionTime":"2026-02-19T12:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.813345 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.813400 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.813417 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.813441 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.813458 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:01Z","lastTransitionTime":"2026-02-19T12:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.916197 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.916600 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.916622 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.916649 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:01 crc kubenswrapper[4833]: I0219 12:47:01.916667 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:01Z","lastTransitionTime":"2026-02-19T12:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.020371 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.020424 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.020436 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.020458 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.020471 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:02Z","lastTransitionTime":"2026-02-19T12:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.123117 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.123156 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.123167 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.123182 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.123195 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:02Z","lastTransitionTime":"2026-02-19T12:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.226580 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.226628 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.226640 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.226658 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.226673 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:02Z","lastTransitionTime":"2026-02-19T12:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.265216 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 01:00:32.943975366 +0000 UTC Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.314870 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.314957 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.315007 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:47:02 crc kubenswrapper[4833]: E0219 12:47:02.315211 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:47:02 crc kubenswrapper[4833]: E0219 12:47:02.315347 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:47:02 crc kubenswrapper[4833]: E0219 12:47:02.315466 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.329716 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.329748 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.329760 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.329778 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.329790 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:02Z","lastTransitionTime":"2026-02-19T12:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.433090 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.433162 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.433183 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.433213 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.433235 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:02Z","lastTransitionTime":"2026-02-19T12:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.536947 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.537006 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.537030 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.537059 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.537083 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:02Z","lastTransitionTime":"2026-02-19T12:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.590025 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.615235 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\
\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:02Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.636428 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91b543fb-1f33-4706-9ce7-dff08bf7b82f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08afefdd71557d6c17668ed12d83aa416dcb83414ff4c8d741df835d2cdfdf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4ef1418f9464e95e739e3a543b14668c04159065dea6093d086d75a32d919e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3caf63dc183f47734bc7ada20dc729d98465449779a981f21080c0f23ef7e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://990bc2af578ada7fc2630b0d8ff77ab12bc15e0883ae62d8f64598944f6255f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:02Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.640067 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.640161 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.640183 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.640245 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.640270 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:02Z","lastTransitionTime":"2026-02-19T12:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.660380 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:02Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.669676 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwqj9_6dafae6a-984e-4e99-90ca-76937bfcc3d6/ovnkube-controller/0.log" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.677255 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" event={"ID":"6dafae6a-984e-4e99-90ca-76937bfcc3d6","Type":"ContainerDied","Data":"09e616ee296f2e8f4d7bb19973078180182ad0a7033a95827b4398d82cb321d0"} Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.677593 4833 generic.go:334] "Generic (PLEG): container finished" podID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerID="09e616ee296f2e8f4d7bb19973078180182ad0a7033a95827b4398d82cb321d0" exitCode=1 Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.680330 4833 scope.go:117] "RemoveContainer" 
containerID="09e616ee296f2e8f4d7bb19973078180182ad0a7033a95827b4398d82cb321d0" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.699654 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154e
dc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e616ee296f2e8f4d7bb19973078180182ad0a7033a95827b4398d82cb321d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-conf
ig/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:02Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.722797 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:02Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.740546 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:02Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.746393 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.746691 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.746858 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.747050 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.747208 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:02Z","lastTransitionTime":"2026-02-19T12:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.759748 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1b032478ce1fd684af32ce6495ec3256986d402cc0694ef33a31b359412196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:02Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.778661 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e8adb7f634f05d280c54259846c41c2b25188cec050cad707c2cbff3cb79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:02Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.799416 4833 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:02Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.824407 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ae9deccb64ee85ac4adb206a70f8e11a3143dbb1939e7fc6e06f73639de860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:02Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.843692 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfd8074-c921-4100-a633-232f33b775b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e82b73c77669166b2181b50da7d3d49dcd5463ce3338237b2bc03f0a207a88f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:02Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.851147 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.851371 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.852342 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.852552 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.852732 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:02Z","lastTransitionTime":"2026-02-19T12:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.868816 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:02Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.896408 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:02Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.908812 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:02Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.928565 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:02Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.956084 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.956140 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.956158 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.956187 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.956206 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:02Z","lastTransitionTime":"2026-02-19T12:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.958943 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:02Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.975142 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:02Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:02 crc kubenswrapper[4833]: I0219 12:47:02.991160 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfd8074-c921-4100-a633-232f33b775b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e82b73c77669166b2181b50da7d3d49dcd5463ce3338237b2bc03f0a207a88f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:02Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.005686 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91b543fb-1f33-4706-9ce7-dff08bf7b82f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08afefdd71557d6c17668ed12d83aa416dcb83414ff4c8d741df835d2cdfdf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4ef1418f9464e95e739e3a543b14668c04159065dea6093d086d75a32d919e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3caf63dc183f47734bc7ada20dc729d98465449779a981f21080c0f23ef7e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://990bc2af578ada7fc2630b0d8ff77ab12bc15e0883ae62d8f64598944f6255f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:03Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.022545 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run
/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:03Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.043567 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:03Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.059170 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.059208 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.059223 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.059241 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.059253 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:03Z","lastTransitionTime":"2026-02-19T12:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.062988 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:03Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.080873 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1b032478ce1fd684af32ce6495ec3256986d402cc0694ef33a31b359412196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:03Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.098733 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:03Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.130188 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e616ee296f2e8f4d7bb19973078180182ad0a7
033a95827b4398d82cb321d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09e616ee296f2e8f4d7bb19973078180182ad0a7033a95827b4398d82cb321d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"message\\\":\\\"l\\\\nI0219 12:47:02.038639 6097 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 12:47:02.038683 6097 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:02.038706 6097 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 12:47:02.038722 6097 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:02.038747 6097 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 12:47:02.038754 6097 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 12:47:02.038800 6097 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:02.038816 6097 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 12:47:02.038857 6097 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:02.038729 6097 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 12:47:02.039108 6097 factory.go:656] Stopping watch factory\\\\nI0219 12:47:02.039128 6097 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 12:47:02.039156 6097 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:03Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.148419 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:03Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.161773 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.161814 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.161826 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.161844 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.161856 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:03Z","lastTransitionTime":"2026-02-19T12:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.172064 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ae9deccb64ee85ac4adb206a70f8e11a3143dbb1939e7fc6e06f73639de860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:03Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.189109 4833 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e8adb7f634f05d280c54259846c41c2b25188cec050cad707c2cbff3cb79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:03Z is after 2025-08-24T17:21:41Z" Feb 19 
12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.264891 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.264957 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.264975 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.264999 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.265021 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:03Z","lastTransitionTime":"2026-02-19T12:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.267020 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 22:53:03.580487107 +0000 UTC Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.356361 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.369228 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.369285 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.369303 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.369325 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.369342 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:03Z","lastTransitionTime":"2026-02-19T12:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.471728 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.471765 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.471775 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.471792 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.471818 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:03Z","lastTransitionTime":"2026-02-19T12:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.574644 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.574688 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.574699 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.574715 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.574741 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:03Z","lastTransitionTime":"2026-02-19T12:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.678054 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.678104 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.678120 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.678138 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.678157 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:03Z","lastTransitionTime":"2026-02-19T12:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.688672 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwqj9_6dafae6a-984e-4e99-90ca-76937bfcc3d6/ovnkube-controller/0.log" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.691826 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" event={"ID":"6dafae6a-984e-4e99-90ca-76937bfcc3d6","Type":"ContainerStarted","Data":"0b0716a965e42c5f127a32b1ffc25bdc14d0eb06ee3e09adf04854195b40d252"} Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.692230 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.706205 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfd8074-c921-4100-a633-232f33b775b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e82b73c77669166b2181b50da7d3d49dcd5463ce3338237b2bc03f0a207a88f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:03Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 
12:47:03.720235 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:03Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.736045 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:03Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.749549 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:03Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.766039 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:03Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.780817 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.780870 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.780890 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.780915 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.780934 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:03Z","lastTransitionTime":"2026-02-19T12:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.787342 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91b543fb-1f33-4706-9ce7-dff08bf7b82f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08afefdd71557d6c17668ed12d83aa416dcb83414ff4c8d741df835d2cdfdf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4ef1418f9464e95e739e3a543b14668c04159065dea6093d086d75a32d919e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3caf63dc183f47734bc7ada20dc729d98465449779a981f21080c0f23ef7e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://990bc2af578ada7fc2630b0d8ff77ab12bc15e0883ae62d8f64598944f6255f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:03Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.803549 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:03Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.825538 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0716a965e42c5f127a32b1ffc25bdc14d0eb06
ee3e09adf04854195b40d252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09e616ee296f2e8f4d7bb19973078180182ad0a7033a95827b4398d82cb321d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"message\\\":\\\"l\\\\nI0219 12:47:02.038639 6097 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 12:47:02.038683 6097 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:02.038706 6097 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 12:47:02.038722 6097 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:02.038747 6097 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 12:47:02.038754 6097 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 12:47:02.038800 6097 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:02.038816 6097 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 12:47:02.038857 6097 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:02.038729 6097 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 12:47:02.039108 6097 factory.go:656] Stopping watch factory\\\\nI0219 12:47:02.039128 6097 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 12:47:02.039156 6097 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:03Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.847431 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:03Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.862183 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:03Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.878158 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1b032478ce1fd684af32ce6495ec3256986d402cc0694ef33a31b359412196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:03Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.883195 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.883232 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.883243 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.883260 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.883271 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:03Z","lastTransitionTime":"2026-02-19T12:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.897013 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e8adb7f634f05d280c54259846c41c2b25188cec050cad707c2cbff3cb79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:03Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.916218 4833 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:03Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.933749 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ae9deccb64ee85ac4adb206a70f8e11a3143dbb1939e7fc6e06f73639de860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:03Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.985851 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.985913 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:03 crc 
kubenswrapper[4833]: I0219 12:47:03.985926 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.985948 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:03 crc kubenswrapper[4833]: I0219 12:47:03.985961 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:03Z","lastTransitionTime":"2026-02-19T12:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.088920 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.088996 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.089015 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.089039 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.089056 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:04Z","lastTransitionTime":"2026-02-19T12:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.191950 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.192012 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.192031 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.192055 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.192076 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:04Z","lastTransitionTime":"2026-02-19T12:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.267243 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 17:52:28.173358465 +0000 UTC
Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.294975 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.295043 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.295061 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.295086 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.295103 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:04Z","lastTransitionTime":"2026-02-19T12:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.314669 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.314740 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.314877 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 12:47:04 crc kubenswrapper[4833]: E0219 12:47:04.314869 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 12:47:04 crc kubenswrapper[4833]: E0219 12:47:04.314993 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 12:47:04 crc kubenswrapper[4833]: E0219 12:47:04.315220 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
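Every "Error syncing pod" and "Node became not ready" entry above reduces to the same readiness gate: the kubelet reports NetworkReady=false until the container runtime can load a CNI configuration from /etc/kubernetes/cni/net.d/, and any pod needing a new sandbox is skipped in the meantime. A minimal sketch of that check, runnable on the node (a hypothetical diagnostic, not part of this log; the directory path comes from the entries above, and the extension list follows the usual libcni convention of .conf, .conflist, and .json files):

    import glob
    import os

    # Directory named in the NetworkPluginNotReady message above. On this node
    # it stays empty until the network plugin (here ovnkube-controller) comes
    # up and writes its CNI config.
    CNI_DIR = "/etc/kubernetes/cni/net.d"

    # libcni-style lookup: any .conf/.conflist/.json file counts as a network
    # configuration, at which point the runtime flips NetworkReady to true.
    confs = sorted(
        path
        for pattern in ("*.conf", "*.conflist", "*.json")
        for path in glob.glob(os.path.join(CNI_DIR, pattern))
    )
    if confs:
        print("CNI configuration present:", *confs, sep="\n  ")
    else:
        print("no CNI configuration file in", CNI_DIR, "- network not ready")

Run while ovnkube-node is still crash-looping and the listing should come back empty, matching the kubelet message above.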
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.398583 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.398655 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.398679 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.398713 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.398733 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:04Z","lastTransitionTime":"2026-02-19T12:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.502426 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.502540 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.502561 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.502590 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.502609 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:04Z","lastTransitionTime":"2026-02-19T12:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.605105 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.605162 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.605179 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.605202 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.605220 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:04Z","lastTransitionTime":"2026-02-19T12:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.699483 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwqj9_6dafae6a-984e-4e99-90ca-76937bfcc3d6/ovnkube-controller/1.log" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.700353 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwqj9_6dafae6a-984e-4e99-90ca-76937bfcc3d6/ovnkube-controller/0.log" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.704003 4833 generic.go:334] "Generic (PLEG): container finished" podID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerID="0b0716a965e42c5f127a32b1ffc25bdc14d0eb06ee3e09adf04854195b40d252" exitCode=1 Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.704064 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" event={"ID":"6dafae6a-984e-4e99-90ca-76937bfcc3d6","Type":"ContainerDied","Data":"0b0716a965e42c5f127a32b1ffc25bdc14d0eb06ee3e09adf04854195b40d252"} Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.704144 4833 scope.go:117] "RemoveContainer" containerID="09e616ee296f2e8f4d7bb19973078180182ad0a7033a95827b4398d82cb321d0" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.705014 4833 scope.go:117] "RemoveContainer" containerID="0b0716a965e42c5f127a32b1ffc25bdc14d0eb06ee3e09adf04854195b40d252" Feb 19 12:47:04 crc kubenswrapper[4833]: E0219 12:47:04.705364 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pwqj9_openshift-ovn-kubernetes(6dafae6a-984e-4e99-90ca-76937bfcc3d6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.708329 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.708441 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.708551 4833 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.708575 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.708630 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:04Z","lastTransitionTime":"2026-02-19T12:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.723811 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfd8074-c921-4100-a633-232f33b775b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e82b73c77669166b2181b50da7d3d49dcd5463ce3338237b2bc03f0a207a88f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:04Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.751011 4833 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:04Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.754063 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7"] Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.758522 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.761292 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.764159 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.778734 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:04Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.793862 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:04Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.798952 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/98bf14d5-7b12-4a96-b73b-3c8467eda471-env-overrides\") pod \"ovnkube-control-plane-749d76644c-6g2h7\" (UID: \"98bf14d5-7b12-4a96-b73b-3c8467eda471\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.799055 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnfnp\" 
(UniqueName: \"kubernetes.io/projected/98bf14d5-7b12-4a96-b73b-3c8467eda471-kube-api-access-pnfnp\") pod \"ovnkube-control-plane-749d76644c-6g2h7\" (UID: \"98bf14d5-7b12-4a96-b73b-3c8467eda471\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.799185 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/98bf14d5-7b12-4a96-b73b-3c8467eda471-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-6g2h7\" (UID: \"98bf14d5-7b12-4a96-b73b-3c8467eda471\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.799221 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/98bf14d5-7b12-4a96-b73b-3c8467eda471-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-6g2h7\" (UID: \"98bf14d5-7b12-4a96-b73b-3c8467eda471\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.812169 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.812236 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.812254 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.812280 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.812301 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:04Z","lastTransitionTime":"2026-02-19T12:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.814743 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:04Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.834653 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91b543fb-1f33-4706-9ce7-dff08bf7b82f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08afefdd71557d6c17668ed12d83aa416dcb83414ff4c8d741df835d2cdfdf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4ef1418f9464e95e739e3a543b14668c04159065dea6093d086d75a32d919e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3caf63dc183f47734bc7ada20dc729d98465449779a981f21080c0f23ef7e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://990bc2af578ada7fc2630b0d8ff77ab12bc15e0883ae62d8f64598944f6255f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:04Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.853891 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:04Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.884908 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0716a965e42c5f127a32b1ffc25bdc14d0eb06
ee3e09adf04854195b40d252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09e616ee296f2e8f4d7bb19973078180182ad0a7033a95827b4398d82cb321d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"message\\\":\\\"l\\\\nI0219 12:47:02.038639 6097 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 12:47:02.038683 6097 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:02.038706 6097 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 12:47:02.038722 6097 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:02.038747 6097 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 12:47:02.038754 6097 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 12:47:02.038800 6097 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:02.038816 6097 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 12:47:02.038857 6097 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:02.038729 6097 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 12:47:02.039108 6097 factory.go:656] Stopping watch factory\\\\nI0219 12:47:02.039128 6097 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 12:47:02.039156 6097 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b0716a965e42c5f127a32b1ffc25bdc14d0eb06ee3e09adf04854195b40d252\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T12:47:04Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 12:47:03.923957 6221 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:03.924165 6221 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 12:47:03.924232 6221 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 12:47:03.924346 6221 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:03.924417 6221 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:03.924425 6221 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 12:47:03.924516 6221 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 12:47:03.924553 6221 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 12:47:03.924569 6221 handler.go:208] Removed *v1.Node 
event handler 7\\\\nI0219 12:47:03.924590 6221 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 12:47:03.925045 6221 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6770871e46
0e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:04Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.900093 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/98bf14d5-7b12-4a96-b73b-3c8467eda471-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-6g2h7\" (UID: \"98bf14d5-7b12-4a96-b73b-3c8467eda471\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.900156 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/98bf14d5-7b12-4a96-b73b-3c8467eda471-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-6g2h7\" (UID: \"98bf14d5-7b12-4a96-b73b-3c8467eda471\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.900225 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/98bf14d5-7b12-4a96-b73b-3c8467eda471-env-overrides\") pod \"ovnkube-control-plane-749d76644c-6g2h7\" (UID: \"98bf14d5-7b12-4a96-b73b-3c8467eda471\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.900279 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnfnp\" (UniqueName: \"kubernetes.io/projected/98bf14d5-7b12-4a96-b73b-3c8467eda471-kube-api-access-pnfnp\") pod \"ovnkube-control-plane-749d76644c-6g2h7\" (UID: \"98bf14d5-7b12-4a96-b73b-3c8467eda471\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.902102 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/98bf14d5-7b12-4a96-b73b-3c8467eda471-env-overrides\") pod \"ovnkube-control-plane-749d76644c-6g2h7\" (UID: \"98bf14d5-7b12-4a96-b73b-3c8467eda471\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.902280 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/98bf14d5-7b12-4a96-b73b-3c8467eda471-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-6g2h7\" (UID: \"98bf14d5-7b12-4a96-b73b-3c8467eda471\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.907041 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:04Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.910726 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/98bf14d5-7b12-4a96-b73b-3c8467eda471-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-6g2h7\" (UID: \"98bf14d5-7b12-4a96-b73b-3c8467eda471\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.916445 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.916590 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.916618 
4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.916655 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.916677 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:04Z","lastTransitionTime":"2026-02-19T12:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.929158 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:04Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.932058 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnfnp\" (UniqueName: \"kubernetes.io/projected/98bf14d5-7b12-4a96-b73b-3c8467eda471-kube-api-access-pnfnp\") pod \"ovnkube-control-plane-749d76644c-6g2h7\" (UID: \"98bf14d5-7b12-4a96-b73b-3c8467eda471\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.951438 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1b032478ce1fd684af32ce6495ec3256986d402cc0694ef33a31b359412196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-19T12:47:04Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.971389 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e8adb7f634f05d280c54259846c41c2b25188cec050cad707c2cbff3cb79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:04Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:04 crc kubenswrapper[4833]: I0219 12:47:04.994050 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:04Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.019363 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ae9deccb64ee85ac4adb206a70f8e11a3143dbb1939e7fc6e06f73639de860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:05Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.020595 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.020673 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:05 crc 
kubenswrapper[4833]: I0219 12:47:05.020693 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.020721 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.020738 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:05Z","lastTransitionTime":"2026-02-19T12:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.040897 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:05Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.073665 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0716a965e42c5f127a32b1ffc25bdc14d0eb06
ee3e09adf04854195b40d252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09e616ee296f2e8f4d7bb19973078180182ad0a7033a95827b4398d82cb321d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"message\\\":\\\"l\\\\nI0219 12:47:02.038639 6097 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 12:47:02.038683 6097 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:02.038706 6097 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 12:47:02.038722 6097 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:02.038747 6097 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 12:47:02.038754 6097 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 12:47:02.038800 6097 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:02.038816 6097 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 12:47:02.038857 6097 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:02.038729 6097 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 12:47:02.039108 6097 factory.go:656] Stopping watch factory\\\\nI0219 12:47:02.039128 6097 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 12:47:02.039156 6097 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b0716a965e42c5f127a32b1ffc25bdc14d0eb06ee3e09adf04854195b40d252\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T12:47:04Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 12:47:03.923957 6221 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:03.924165 6221 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 12:47:03.924232 6221 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 12:47:03.924346 6221 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:03.924417 6221 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:03.924425 6221 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 12:47:03.924516 6221 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 12:47:03.924553 6221 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 12:47:03.924569 6221 handler.go:208] Removed *v1.Node 
event handler 7\\\\nI0219 12:47:03.924590 6221 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 12:47:03.925045 6221 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6770871e46
0e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:05Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.081458 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.097769 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:05Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.119786 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:05Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.129807 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.129878 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.129891 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.129913 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.129949 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:05Z","lastTransitionTime":"2026-02-19T12:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.137237 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1b032478ce1fd684af32ce6495ec3256986d402cc0694ef33a31b359412196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:05Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.155459 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e8adb7f634f05d280c54259846c41c2b25188cec050cad707c2cbff3cb79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:05Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.174028 4833 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:05Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.196185 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ae9deccb64ee85ac4adb206a70f8e11a3143dbb1939e7fc6e06f73639de860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:05Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.212622 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfd8074-c921-4100-a633-232f33b775b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e82b73c77669166b2181b50da7d3d49dcd5463ce3338237b2bc03f0a207a88f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:05Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.230586 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf14d5-7b12-4a96-b73b-3c8467eda471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:47:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6g2h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:05Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.233357 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.233390 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.233399 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.233414 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.233427 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:05Z","lastTransitionTime":"2026-02-19T12:47:05Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.251318 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:05Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.267924 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 07:08:11.734749504 +0000 UTC Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.269294 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:05Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.289145 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:05Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.310206 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:05Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.331488 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91b543fb-1f33-4706-9ce7-dff08bf7b82f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08afefdd71557d6c17668ed12d83aa416dcb83414ff4c8d741df835d2cdfdf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4ef1418f9464e95e739e3a543b14668c04159065dea6093d086d75a32d919e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3caf63dc183f47734bc7ada20dc729d98465449779a981f21080c0f23ef7e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://990bc2af578ada7fc2630b0d8ff77ab12bc15e0883ae62d8f64598944f6255f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:05Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.338658 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.338718 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.338765 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.338791 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.338810 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:05Z","lastTransitionTime":"2026-02-19T12:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.443128 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.443331 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.443355 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.443383 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.443402 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:05Z","lastTransitionTime":"2026-02-19T12:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.546733 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.546797 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.546814 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.546838 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.546872 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:05Z","lastTransitionTime":"2026-02-19T12:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.649998 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.650037 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.650048 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.650067 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.650079 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:05Z","lastTransitionTime":"2026-02-19T12:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.711744 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" event={"ID":"98bf14d5-7b12-4a96-b73b-3c8467eda471","Type":"ContainerStarted","Data":"acf8f07c6c232496c55aa498538c41bf18a5eb400cf7fa98145e9f88c129444e"} Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.711802 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" event={"ID":"98bf14d5-7b12-4a96-b73b-3c8467eda471","Type":"ContainerStarted","Data":"fc3ae8695e69f07a2d2c47573b6295098b4c09e49549f32625f545aa3abe6cbf"} Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.711817 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" event={"ID":"98bf14d5-7b12-4a96-b73b-3c8467eda471","Type":"ContainerStarted","Data":"1f7af4dccdf415a4d5dc3788eacea8d957183c494fd753713ee9c8ed5a5e4681"} Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.713788 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwqj9_6dafae6a-984e-4e99-90ca-76937bfcc3d6/ovnkube-controller/1.log" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.717619 4833 scope.go:117] "RemoveContainer" containerID="0b0716a965e42c5f127a32b1ffc25bdc14d0eb06ee3e09adf04854195b40d252" Feb 19 12:47:05 crc kubenswrapper[4833]: E0219 12:47:05.717797 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pwqj9_openshift-ovn-kubernetes(6dafae6a-984e-4e99-90ca-76937bfcc3d6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.727144 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:05Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.737340 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:05Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.747552 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1b032478ce1fd684af32ce6495ec3256986d402cc0694ef33a31b359412196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:05Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.751842 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.751873 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.751885 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.751900 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.751913 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:05Z","lastTransitionTime":"2026-02-19T12:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.762246 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:05Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.792396 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0716a965e42c5f127a32b1ffc25bdc14d0eb06
ee3e09adf04854195b40d252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09e616ee296f2e8f4d7bb19973078180182ad0a7033a95827b4398d82cb321d0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"message\\\":\\\"l\\\\nI0219 12:47:02.038639 6097 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0219 12:47:02.038683 6097 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:02.038706 6097 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 12:47:02.038722 6097 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:02.038747 6097 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 12:47:02.038754 6097 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0219 12:47:02.038800 6097 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:02.038816 6097 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0219 12:47:02.038857 6097 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:02.038729 6097 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 12:47:02.039108 6097 factory.go:656] Stopping watch factory\\\\nI0219 12:47:02.039128 6097 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 12:47:02.039156 6097 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b0716a965e42c5f127a32b1ffc25bdc14d0eb06ee3e09adf04854195b40d252\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T12:47:04Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 12:47:03.923957 6221 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:03.924165 6221 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 12:47:03.924232 6221 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 12:47:03.924346 6221 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:03.924417 6221 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:03.924425 6221 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 12:47:03.924516 6221 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 12:47:03.924553 6221 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 12:47:03.924569 6221 handler.go:208] Removed *v1.Node 
event handler 7\\\\nI0219 12:47:03.924590 6221 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 12:47:03.925045 6221 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6770871e46
0e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:05Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.805558 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:05Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.825353 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ae9deccb64ee85ac4adb206a70f8e11a3143dbb1939e7fc6e06f73639de860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:05Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.837651 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e8adb7f634f05d280c54259846c41c2b25188cec050cad707c2cbff3cb79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/r
ootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:05Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.849247 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:05Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.854037 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.854082 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.854093 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.854111 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.854123 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:05Z","lastTransitionTime":"2026-02-19T12:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.860202 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"o
vnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:05Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.870665 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:05Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.886969 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfd8074-c921-4100-a633-232f33b775b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e82b73c77669166b2181b50da7d3d49dcd5463ce3338237b2bc03f0a207a88f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:05Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.901770 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf14d5-7b12-4a96-b73b-3c8467eda471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc3ae8695e69f07a2d2c47573b6295098b4c09e49549f32625f545aa3abe6cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf8f07c6c232496c55aa498538c41bf18a5eb400cf7fa98145e9f88c129444e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:47:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6g2h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:05Z is after 2025-08-24T17:21:41Z" Feb 19 
12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.915184 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91b543fb-1f33-4706-9ce7-dff08bf7b82f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08afefdd71557d6c17668ed12d83aa416dcb83414ff4c8d741df835d2cdfdf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4ef1418f9464e95e739e3a543b14668c04159065dea6093d086d75a32d919e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3caf63dc183f47734bc7ada20dc729d98465449779a981f21080c0f23ef7e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://990bc2af578ada7fc2630b0d8ff77ab12bc15e0883ae62d8f64598944f6255f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:05Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.930664 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:05Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.945112 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91b543fb-1f33-4706-9ce7-dff08bf7b82f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08afefdd71557d6c17668ed12d83aa416dcb83414ff4c8d741df835d2cdfdf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4ef1418f9464e95e739e3a543b14668c04159065dea6093d086d75a32d919e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3caf63dc183f47734bc7ada20dc729d98465449779a981f21080c0f23ef7e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://990bc2af578ada7fc2630b0d8ff77ab12bc15e0883ae62d8f64598944f6255f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:05Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.957427 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.957657 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.957752 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.957865 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.958017 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:05Z","lastTransitionTime":"2026-02-19T12:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.957958 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:05Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.972385 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sh
a256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:05Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:05 crc kubenswrapper[4833]: I0219 12:47:05.986928 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:05Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.004225 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1b032478ce1fd684af32ce6495ec3256986d402cc0694ef33a31b359412196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:06Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.019666 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:06Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.021128 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.021195 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.021214 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.021256 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.021275 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:06Z","lastTransitionTime":"2026-02-19T12:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:06 crc kubenswrapper[4833]: E0219 12:47:06.043322 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9bc9539f-520c-4440-a07d-375a239a8e0f\\\",\\\"systemUUID\\\":\\\"dc14cf1a-5576-4d69-98fb-0c44d3f24b1f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:06Z is after 
2025-08-24T17:21:41Z" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.059272 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.059364 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.059382 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.059805 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.060041 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:06Z","lastTransitionTime":"2026-02-19T12:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.070591 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0716a965e42c5f127a32b1ffc25bdc14d0eb06
ee3e09adf04854195b40d252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b0716a965e42c5f127a32b1ffc25bdc14d0eb06ee3e09adf04854195b40d252\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T12:47:04Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 12:47:03.923957 6221 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:03.924165 6221 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 12:47:03.924232 6221 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 12:47:03.924346 6221 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:03.924417 6221 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:03.924425 6221 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 12:47:03.924516 6221 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 12:47:03.924553 6221 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 12:47:03.924569 6221 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 12:47:03.924590 6221 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 12:47:03.925045 6221 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:47:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwqj9_openshift-ovn-kubernetes(6dafae6a-984e-4e99-90ca-76937bfcc3d6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:06Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.098176 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:06Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:06 crc kubenswrapper[4833]: E0219 12:47:06.098307 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9bc9539f-520c-4440-a07d-375a239a8e0f\\\",\\\"systemUUID\\\":\\\"dc14cf1a-5576-4d69-98fb-0c44d3f24b1f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:06Z is after 
2025-08-24T17:21:41Z" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.103021 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.103069 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.103089 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.103157 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.103175 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:06Z","lastTransitionTime":"2026-02-19T12:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:06 crc kubenswrapper[4833]: E0219 12:47:06.120554 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9bc9539f-520c-4440-a07d-375a239a8e0f\\\",\\\"systemUUID\\\":\\\"dc14cf1a-5576-4d69-98fb-0c44d3f24b1f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:06Z is after 
2025-08-24T17:21:41Z" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.121555 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ae9deccb64ee85ac4adb206a70f8e11a3143dbb1939e7fc6e06f73639de860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\
\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:06Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.125663 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.125698 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.125711 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.125730 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.125741 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:06Z","lastTransitionTime":"2026-02-19T12:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.139327 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e8adb7f634f05d280c54259846c41c2b25188cec050cad707c2cbff3cb79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea1
77225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:06Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:06 crc kubenswrapper[4833]: E0219 12:47:06.144717 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9bc9539f-520c-4440-a07d-375a239a8e0f\\\",\\\"systemUUID\\\":\\\"dc14cf1a-5576-4d69-98fb-0c44d3f24b1f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:06Z is after 
2025-08-24T17:21:41Z" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.148847 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.148881 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.148892 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.148908 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.148919 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:06Z","lastTransitionTime":"2026-02-19T12:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.153848 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:06Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:06 
crc kubenswrapper[4833]: E0219 12:47:06.163556 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider 
started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d
34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9bc9539f-520c-4440-a07d-375a239a8e0f\\\",\\\"systemUUID\\\":\\\"dc14cf1a-5576-4d69-98fb-0c44d3f24b1f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:06Z is after 2025-08-24T17:21:41Z" Feb 
19 12:47:06 crc kubenswrapper[4833]: E0219 12:47:06.163727 4833 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.165521 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.165556 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.165566 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.165584 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.165596 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:06Z","lastTransitionTime":"2026-02-19T12:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.166677 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d
773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:06Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.176089 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:06Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.185273 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfd8074-c921-4100-a633-232f33b775b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e82b73c77669166b2181b50da7d3d49dcd5463ce3338237b2bc03f0a207a88f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:06Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.200563 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf14d5-7b12-4a96-b73b-3c8467eda471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc3ae8695e69f07a2d2c47573b6295098b4c09e49549f32625f545aa3abe6cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf8f07c6c232496c55aa498538c41bf18a5eb400cf7fa98145e9f88c129444e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:47:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6g2h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:06Z is after 2025-08-24T17:21:41Z" Feb 19 
12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.216454 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:47:06 crc kubenswrapper[4833]: E0219 12:47:06.216672 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:47:22.21663977 +0000 UTC m=+52.612158538 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.216734 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.216865 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.216901 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.216938 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:47:06 crc kubenswrapper[4833]: E0219 12:47:06.216974 4833 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 12:47:06 crc kubenswrapper[4833]: E0219 12:47:06.217075 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-19 12:47:22.21705466 +0000 UTC m=+52.612573438 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 12:47:06 crc kubenswrapper[4833]: E0219 12:47:06.217103 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 12:47:06 crc kubenswrapper[4833]: E0219 12:47:06.217160 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 12:47:06 crc kubenswrapper[4833]: E0219 12:47:06.217184 4833 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 12:47:06 crc kubenswrapper[4833]: E0219 12:47:06.217180 4833 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 12:47:06 crc kubenswrapper[4833]: E0219 12:47:06.217123 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 12:47:06 crc kubenswrapper[4833]: E0219 12:47:06.217234 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 12:47:06 crc kubenswrapper[4833]: E0219 12:47:06.217247 4833 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 12:47:06 crc kubenswrapper[4833]: E0219 12:47:06.217265 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 12:47:22.217237585 +0000 UTC m=+52.612756433 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 12:47:06 crc kubenswrapper[4833]: E0219 12:47:06.217289 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-19 12:47:22.217279986 +0000 UTC m=+52.612798754 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 12:47:06 crc kubenswrapper[4833]: E0219 12:47:06.217307 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 12:47:22.217299556 +0000 UTC m=+52.612818324 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.260556 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-clgkm"] Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.261366 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:47:06 crc kubenswrapper[4833]: E0219 12:47:06.261480 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.268288 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 16:46:02.532179288 +0000 UTC Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.268375 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.268450 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.268470 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.268534 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.268559 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:06Z","lastTransitionTime":"2026-02-19T12:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.279829 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:06Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.303226 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ae9deccb64ee85ac4adb206a70f8e11a3143dbb1939e7fc6e06f73639de860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:06Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.314444 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.314586 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.314444 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:47:06 crc kubenswrapper[4833]: E0219 12:47:06.314615 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:47:06 crc kubenswrapper[4833]: E0219 12:47:06.314834 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:47:06 crc kubenswrapper[4833]: E0219 12:47:06.314920 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.318066 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4177542e-89ba-436d-bc9d-e792f2da656c-metrics-certs\") pod \"network-metrics-daemon-clgkm\" (UID: \"4177542e-89ba-436d-bc9d-e792f2da656c\") " pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.318399 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nk85\" (UniqueName: \"kubernetes.io/projected/4177542e-89ba-436d-bc9d-e792f2da656c-kube-api-access-5nk85\") pod \"network-metrics-daemon-clgkm\" (UID: \"4177542e-89ba-436d-bc9d-e792f2da656c\") " pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.321714 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e8adb7f634f05d280c54259846c41c2b25188cec050cad707c2cbff3cb79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:06Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.343316 4833 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:06Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.366284 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:06Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.371335 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.371384 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.371398 4833 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.371419 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.371432 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:06Z","lastTransitionTime":"2026-02-19T12:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.382880 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:06Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.402788 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhkjl" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfd8074-c921-4100-a633-232f33b775b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e82b73c77669166b2181b50da7d3d49dcd5463ce3338237b2bc03f0a207a88f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:06Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.419606 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nk85\" (UniqueName: \"kubernetes.io/projected/4177542e-89ba-436d-bc9d-e792f2da656c-kube-api-access-5nk85\") pod \"network-metrics-daemon-clgkm\" (UID: \"4177542e-89ba-436d-bc9d-e792f2da656c\") " pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.419697 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4177542e-89ba-436d-bc9d-e792f2da656c-metrics-certs\") pod \"network-metrics-daemon-clgkm\" (UID: \"4177542e-89ba-436d-bc9d-e792f2da656c\") " pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:47:06 crc kubenswrapper[4833]: E0219 12:47:06.419901 4833 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 
19 12:47:06 crc kubenswrapper[4833]: E0219 12:47:06.419979 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4177542e-89ba-436d-bc9d-e792f2da656c-metrics-certs podName:4177542e-89ba-436d-bc9d-e792f2da656c nodeName:}" failed. No retries permitted until 2026-02-19 12:47:06.919962018 +0000 UTC m=+37.315480796 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4177542e-89ba-436d-bc9d-e792f2da656c-metrics-certs") pod "network-metrics-daemon-clgkm" (UID: "4177542e-89ba-436d-bc9d-e792f2da656c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.422534 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf14d5-7b12-4a96-b73b-3c8467eda471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc3ae8695e69f07a2d2c47573b6295098b4c09e49549f32625f545aa3abe6cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf8f07c6c232496c55aa498538c41bf18a5eb400cf7fa98145e9f88c129444e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",
\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:47:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6g2h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:06Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.443085 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91b543fb-1f33-4706-9ce7-dff08bf7b82f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08afefdd71557d6c17668ed12d83aa416dcb83414ff4c8d741df835d2cdfdf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4ef1418f9464e95e739e3a543b14668c04159065dea6093d086d75a32d919e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3caf63dc183f47734bc7ada20dc729d98465449779a981f21080c0f23ef7e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://990bc2af578ada7fc2630b0d8ff77ab12bc15e0883ae62d8f64598944f6255f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:06Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.448438 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nk85\" (UniqueName: \"kubernetes.io/projected/4177542e-89ba-436d-bc9d-e792f2da656c-kube-api-access-5nk85\") pod \"network-metrics-daemon-clgkm\" (UID: \"4177542e-89ba-436d-bc9d-e792f2da656c\") " pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.463544 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:06Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.473898 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.473955 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.473972 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.473997 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.474015 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:06Z","lastTransitionTime":"2026-02-19T12:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.488413 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:06Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.507726 4833 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:06Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.522056 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1b032478ce1fd684af32ce6495ec3256986d402cc0694ef33a31b359412196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:06Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.538730 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:06Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.565664 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0716a965e42c5f127a32b1ffc25bdc14d0eb06
ee3e09adf04854195b40d252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b0716a965e42c5f127a32b1ffc25bdc14d0eb06ee3e09adf04854195b40d252\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T12:47:04Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 12:47:03.923957 6221 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:03.924165 6221 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 12:47:03.924232 6221 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 12:47:03.924346 6221 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:03.924417 6221 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:03.924425 6221 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 12:47:03.924516 6221 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 12:47:03.924553 6221 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 12:47:03.924569 6221 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 12:47:03.924590 6221 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 12:47:03.925045 6221 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:47:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwqj9_openshift-ovn-kubernetes(6dafae6a-984e-4e99-90ca-76937bfcc3d6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:06Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.576740 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.576829 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.576847 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.576904 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.576946 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:06Z","lastTransitionTime":"2026-02-19T12:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.584793 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clgkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4177542e-89ba-436d-bc9d-e792f2da656c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nk85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nk85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:47:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clgkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:06Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.680479 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.680568 4833 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.680587 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.680611 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.680628 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:06Z","lastTransitionTime":"2026-02-19T12:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.784060 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.784108 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.784125 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.784147 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.784179 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:06Z","lastTransitionTime":"2026-02-19T12:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.887396 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.887447 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.887464 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.887486 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.887532 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:06Z","lastTransitionTime":"2026-02-19T12:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.925448 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4177542e-89ba-436d-bc9d-e792f2da656c-metrics-certs\") pod \"network-metrics-daemon-clgkm\" (UID: \"4177542e-89ba-436d-bc9d-e792f2da656c\") " pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:47:06 crc kubenswrapper[4833]: E0219 12:47:06.925662 4833 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 12:47:06 crc kubenswrapper[4833]: E0219 12:47:06.926079 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4177542e-89ba-436d-bc9d-e792f2da656c-metrics-certs podName:4177542e-89ba-436d-bc9d-e792f2da656c nodeName:}" failed. No retries permitted until 2026-02-19 12:47:07.926051045 +0000 UTC m=+38.321569853 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4177542e-89ba-436d-bc9d-e792f2da656c-metrics-certs") pod "network-metrics-daemon-clgkm" (UID: "4177542e-89ba-436d-bc9d-e792f2da656c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.991000 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.991052 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.991073 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.991099 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:06 crc kubenswrapper[4833]: I0219 12:47:06.991119 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:06Z","lastTransitionTime":"2026-02-19T12:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.093441 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.093551 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.093580 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.093613 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.093640 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:07Z","lastTransitionTime":"2026-02-19T12:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.195997 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.196068 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.196085 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.196107 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.196124 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:07Z","lastTransitionTime":"2026-02-19T12:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.268691 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 01:19:43.740191669 +0000 UTC Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.299375 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.299440 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.299462 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.299488 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.299555 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:07Z","lastTransitionTime":"2026-02-19T12:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.403482 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.403602 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.403622 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.403648 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.403670 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:07Z","lastTransitionTime":"2026-02-19T12:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.506402 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.506489 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.506585 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.506643 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.506661 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:07Z","lastTransitionTime":"2026-02-19T12:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.609770 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.609850 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.609865 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.609920 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.609938 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:07Z","lastTransitionTime":"2026-02-19T12:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.713887 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.713944 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.713959 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.713987 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.714004 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:07Z","lastTransitionTime":"2026-02-19T12:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.818332 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.818637 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.818773 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.818911 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.819073 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:07Z","lastTransitionTime":"2026-02-19T12:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.922193 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.922252 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.922269 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.922293 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.922312 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:07Z","lastTransitionTime":"2026-02-19T12:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:07 crc kubenswrapper[4833]: I0219 12:47:07.964012 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4177542e-89ba-436d-bc9d-e792f2da656c-metrics-certs\") pod \"network-metrics-daemon-clgkm\" (UID: \"4177542e-89ba-436d-bc9d-e792f2da656c\") " pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:47:07 crc kubenswrapper[4833]: E0219 12:47:07.964235 4833 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 12:47:07 crc kubenswrapper[4833]: E0219 12:47:07.964392 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4177542e-89ba-436d-bc9d-e792f2da656c-metrics-certs podName:4177542e-89ba-436d-bc9d-e792f2da656c nodeName:}" failed. No retries permitted until 2026-02-19 12:47:09.96436119 +0000 UTC m=+40.359879988 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4177542e-89ba-436d-bc9d-e792f2da656c-metrics-certs") pod "network-metrics-daemon-clgkm" (UID: "4177542e-89ba-436d-bc9d-e792f2da656c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.025634 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.025709 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.025732 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.025756 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.025774 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:08Z","lastTransitionTime":"2026-02-19T12:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.128142 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.128195 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.128214 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.128235 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.128251 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:08Z","lastTransitionTime":"2026-02-19T12:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
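The metrics-certs mount keeps failing because the secret openshift-multus/metrics-daemon-secret is not yet registered in the kubelet's object cache; note durationBeforeRetry growing from 1s (12:47:06) to 2s (12:47:07). The delays follow the kubelet's doubling backoff for volume operations. A sketch of that schedule — the 500ms base and the roughly two-minute cap are our reading of the kubelet's exponential-backoff helper and may differ across versions:

// backoff.go — illustrative only: reproduces the doubling retry delays the
// kubelet logs as durationBeforeRetry (1s, 2s, ... above). Base and cap are
// assumptions, not values taken from this log.
package main

import (
	"fmt"
	"time"
)

func main() {
	const base = 500 * time.Millisecond
	const maxDelay = 2*time.Minute + 2*time.Second
	d := base
	for i := 1; i <= 10; i++ {
		d *= 2 // double on every failed attempt
		if d > maxDelay {
			d = maxDelay
		}
		fmt.Printf("retry %2d: durationBeforeRetry %s\n", i, d)
	}
}

If the secret is registered before the cap is reached, the next retry simply mounts it; the growing delay only bounds how quickly that happens.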
Has your network provider started?"} Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.231365 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.231427 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.231446 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.231471 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.231489 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:08Z","lastTransitionTime":"2026-02-19T12:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.268906 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 01:33:37.794232323 +0000 UTC Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.314478 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.314629 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.314883 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:47:08 crc kubenswrapper[4833]: E0219 12:47:08.314803 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:47:08 crc kubenswrapper[4833]: E0219 12:47:08.315133 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:47:08 crc kubenswrapper[4833]: E0219 12:47:08.315301 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.315549 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:47:08 crc kubenswrapper[4833]: E0219 12:47:08.315886 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.334124 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.334185 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.334201 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.334224 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.334240 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:08Z","lastTransitionTime":"2026-02-19T12:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.437874 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.438314 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.438347 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.438376 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.438398 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:08Z","lastTransitionTime":"2026-02-19T12:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.542241 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.542323 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.542350 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.542382 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.542405 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:08Z","lastTransitionTime":"2026-02-19T12:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.645185 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.645242 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.645261 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.645284 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.645301 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:08Z","lastTransitionTime":"2026-02-19T12:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.747427 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.747540 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.747564 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.747591 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.747612 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:08Z","lastTransitionTime":"2026-02-19T12:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.851279 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.851353 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.851371 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.851398 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.851416 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:08Z","lastTransitionTime":"2026-02-19T12:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.954743 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.954805 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.954822 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.954846 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:08 crc kubenswrapper[4833]: I0219 12:47:08.954864 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:08Z","lastTransitionTime":"2026-02-19T12:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.059277 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.059351 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.059370 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.059396 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.059416 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:09Z","lastTransitionTime":"2026-02-19T12:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.162902 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.162973 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.162998 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.163023 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.163045 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:09Z","lastTransitionTime":"2026-02-19T12:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.265832 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.265906 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.265923 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.265947 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.265964 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:09Z","lastTransitionTime":"2026-02-19T12:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.269317 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 02:44:31.288059837 +0000 UTC Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.368558 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.368619 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.368635 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.368660 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.368680 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:09Z","lastTransitionTime":"2026-02-19T12:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.473362 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.473417 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.473434 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.473457 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.473550 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:09Z","lastTransitionTime":"2026-02-19T12:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.608361 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.608414 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.608428 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.608446 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.608461 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:09Z","lastTransitionTime":"2026-02-19T12:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.710339 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.710385 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.710398 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.710415 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.710428 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:09Z","lastTransitionTime":"2026-02-19T12:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.812618 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.812696 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.812719 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.812750 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.812776 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:09Z","lastTransitionTime":"2026-02-19T12:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.916190 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.916239 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.916256 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.916286 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:09 crc kubenswrapper[4833]: I0219 12:47:09.916310 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:09Z","lastTransitionTime":"2026-02-19T12:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.010843 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4177542e-89ba-436d-bc9d-e792f2da656c-metrics-certs\") pod \"network-metrics-daemon-clgkm\" (UID: \"4177542e-89ba-436d-bc9d-e792f2da656c\") " pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:47:10 crc kubenswrapper[4833]: E0219 12:47:10.011122 4833 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 12:47:10 crc kubenswrapper[4833]: E0219 12:47:10.011239 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4177542e-89ba-436d-bc9d-e792f2da656c-metrics-certs podName:4177542e-89ba-436d-bc9d-e792f2da656c nodeName:}" failed. No retries permitted until 2026-02-19 12:47:14.011212758 +0000 UTC m=+44.406731566 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4177542e-89ba-436d-bc9d-e792f2da656c-metrics-certs") pod "network-metrics-daemon-clgkm" (UID: "4177542e-89ba-436d-bc9d-e792f2da656c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.019151 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.019191 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.019208 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.019229 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.019245 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:10Z","lastTransitionTime":"2026-02-19T12:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.122370 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.122421 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.122438 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.122460 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.122480 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:10Z","lastTransitionTime":"2026-02-19T12:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.235173 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.235240 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.235256 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.235300 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.235320 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:10Z","lastTransitionTime":"2026-02-19T12:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.270136 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 04:13:17.289507977 +0000 UTC Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.314475 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.314684 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.314808 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:47:10 crc kubenswrapper[4833]: E0219 12:47:10.315071 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.314434 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:47:10 crc kubenswrapper[4833]: E0219 12:47:10.315307 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:47:10 crc kubenswrapper[4833]: E0219 12:47:10.315478 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:47:10 crc kubenswrapper[4833]: E0219 12:47:10.315808 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.332882 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:10Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.337336 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.337375 4833 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.337386 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.337400 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.337411 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:10Z","lastTransitionTime":"2026-02-19T12:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.348455 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfd8074-c921-4100-a633-232f33b775b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e82b73c77669166b2181b50da7d3d49dcd5463ce3338237b2bc03f0a207a88f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-19T12:47:10Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.364806 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf14d5-7b12-4a96-b73b-3c8467eda471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc3ae8695e69f07a2d2c47573b6295098b4c09e49549f32625f545aa3abe6cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf8f07c6c232496c55aa498538c41bf18a5eb400cf7fa98145e9f88c129444e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:47:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6g2h7\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:10Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.384911 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:10Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.400872 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:10Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.416427 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:10Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.438675 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91b543fb-1f33-4706-9ce7-dff08bf7b82f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08afefdd71557d6c17668ed12d83aa416dcb83414ff4c8d741df835d2cdfdf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4ef1418f9464e95e739e3a543b14668c04159065dea6093d086d75a32d919e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3caf63dc183f47734bc7ada20dc729d98465449779a981f21080c0f23ef7e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://990bc2af578ada7fc2630b0d8ff77ab12bc15e0883ae62d8f64598944f6255f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:10Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.442460 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.442556 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.442576 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.442601 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.442619 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:10Z","lastTransitionTime":"2026-02-19T12:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.458292 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1b032478ce1fd684af32ce6495ec3256986d402cc0694ef33a31b359412196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:10Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.475404 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:10Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.504118 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b0716a965e42c5f127a32b1ffc25bdc14d0eb06
ee3e09adf04854195b40d252\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b0716a965e42c5f127a32b1ffc25bdc14d0eb06ee3e09adf04854195b40d252\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T12:47:04Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 12:47:03.923957 6221 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:03.924165 6221 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 12:47:03.924232 6221 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 12:47:03.924346 6221 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:03.924417 6221 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:03.924425 6221 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 12:47:03.924516 6221 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 12:47:03.924553 6221 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 12:47:03.924569 6221 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 12:47:03.924590 6221 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 12:47:03.925045 6221 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:47:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwqj9_openshift-ovn-kubernetes(6dafae6a-984e-4e99-90ca-76937bfcc3d6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:10Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.524259 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clgkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4177542e-89ba-436d-bc9d-e792f2da656c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nk85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nk85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:47:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clgkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:10Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.544811 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.544872 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.544889 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.544914 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.544931 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:10Z","lastTransitionTime":"2026-02-19T12:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.547871 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:10Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.567691 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:10Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.591277 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ae9deccb64ee85ac4adb206a70f8e11a3143dbb1939e7fc6e06f73639de860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:10Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.613379 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e8adb7f634f05d280c54259846c41c2b25188cec050cad707c2cbff3cb79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:10Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.633090 4833 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:10Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.647713 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.647769 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.647784 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.647805 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.647821 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:10Z","lastTransitionTime":"2026-02-19T12:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.751040 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.751099 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.751115 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.751139 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.751157 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:10Z","lastTransitionTime":"2026-02-19T12:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.853942 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.854015 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.854032 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.854056 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.854076 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:10Z","lastTransitionTime":"2026-02-19T12:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.956782 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.956843 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.956861 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.956884 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:10 crc kubenswrapper[4833]: I0219 12:47:10.956904 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:10Z","lastTransitionTime":"2026-02-19T12:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.059707 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.059787 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.059809 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.059837 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.059889 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:11Z","lastTransitionTime":"2026-02-19T12:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.163634 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.163702 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.163720 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.163744 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.163763 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:11Z","lastTransitionTime":"2026-02-19T12:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.266668 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.267017 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.267203 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.267384 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.267585 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:11Z","lastTransitionTime":"2026-02-19T12:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.270889 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 08:58:25.597593275 +0000 UTC Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.371150 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.371203 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.371226 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.371258 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.371280 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:11Z","lastTransitionTime":"2026-02-19T12:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.474470 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.474589 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.474613 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.474640 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.474659 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:11Z","lastTransitionTime":"2026-02-19T12:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.577534 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.577598 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.577615 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.577639 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.577661 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:11Z","lastTransitionTime":"2026-02-19T12:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.680921 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.681001 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.681024 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.681053 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.681074 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:11Z","lastTransitionTime":"2026-02-19T12:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.785658 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.785727 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.785744 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.785769 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.785785 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:11Z","lastTransitionTime":"2026-02-19T12:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.889420 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.889556 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.889585 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.889683 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.889715 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:11Z","lastTransitionTime":"2026-02-19T12:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.992874 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.992940 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.992960 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.992986 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:11 crc kubenswrapper[4833]: I0219 12:47:11.993003 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:11Z","lastTransitionTime":"2026-02-19T12:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.096403 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.096466 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.096478 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.096517 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.096531 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:12Z","lastTransitionTime":"2026-02-19T12:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.199626 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.199682 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.199701 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.199721 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.199736 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:12Z","lastTransitionTime":"2026-02-19T12:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.271639 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 10:38:59.161462984 +0000 UTC Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.303371 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.303443 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.303462 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.303487 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.303534 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:12Z","lastTransitionTime":"2026-02-19T12:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.314757 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.314826 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.314925 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:47:12 crc kubenswrapper[4833]: E0219 12:47:12.314919 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:47:12 crc kubenswrapper[4833]: E0219 12:47:12.315076 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:47:12 crc kubenswrapper[4833]: E0219 12:47:12.315173 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.315305 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:47:12 crc kubenswrapper[4833]: E0219 12:47:12.315397 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.405919 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.405981 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.405997 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.406021 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.406040 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:12Z","lastTransitionTime":"2026-02-19T12:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.508852 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.508926 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.508949 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.508982 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.509004 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:12Z","lastTransitionTime":"2026-02-19T12:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.611787 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.611846 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.611864 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.611887 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.611903 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:12Z","lastTransitionTime":"2026-02-19T12:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.715043 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.715119 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.715141 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.715174 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.715196 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:12Z","lastTransitionTime":"2026-02-19T12:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.818111 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.818193 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.818210 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.818235 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.818251 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:12Z","lastTransitionTime":"2026-02-19T12:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.921246 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.921320 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.921345 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.921377 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:12 crc kubenswrapper[4833]: I0219 12:47:12.921398 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:12Z","lastTransitionTime":"2026-02-19T12:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.024554 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.024625 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.024651 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.024682 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.024706 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:13Z","lastTransitionTime":"2026-02-19T12:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.127726 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.127794 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.127820 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.127842 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.127860 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:13Z","lastTransitionTime":"2026-02-19T12:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.230779 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.230869 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.230895 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.230927 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.230963 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:13Z","lastTransitionTime":"2026-02-19T12:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.272744 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 11:33:52.658057625 +0000 UTC Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.335015 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.335088 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.335111 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.335141 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.335164 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:13Z","lastTransitionTime":"2026-02-19T12:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.439059 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.439111 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.439128 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.439150 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.439167 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:13Z","lastTransitionTime":"2026-02-19T12:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.542322 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.542386 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.542409 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.542440 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.542465 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:13Z","lastTransitionTime":"2026-02-19T12:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.646874 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.646919 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.646938 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.646961 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.646983 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:13Z","lastTransitionTime":"2026-02-19T12:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.750430 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.750491 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.750539 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.750563 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.750581 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:13Z","lastTransitionTime":"2026-02-19T12:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.853851 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.853937 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.853972 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.854002 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.854022 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:13Z","lastTransitionTime":"2026-02-19T12:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.956576 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.956651 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.956673 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.956699 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:13 crc kubenswrapper[4833]: I0219 12:47:13.956716 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:13Z","lastTransitionTime":"2026-02-19T12:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.050005 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4177542e-89ba-436d-bc9d-e792f2da656c-metrics-certs\") pod \"network-metrics-daemon-clgkm\" (UID: \"4177542e-89ba-436d-bc9d-e792f2da656c\") " pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:47:14 crc kubenswrapper[4833]: E0219 12:47:14.050185 4833 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 12:47:14 crc kubenswrapper[4833]: E0219 12:47:14.050302 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4177542e-89ba-436d-bc9d-e792f2da656c-metrics-certs podName:4177542e-89ba-436d-bc9d-e792f2da656c nodeName:}" failed. No retries permitted until 2026-02-19 12:47:22.050269492 +0000 UTC m=+52.445788300 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4177542e-89ba-436d-bc9d-e792f2da656c-metrics-certs") pod "network-metrics-daemon-clgkm" (UID: "4177542e-89ba-436d-bc9d-e792f2da656c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.059619 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.059689 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.059712 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.059741 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.059769 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:14Z","lastTransitionTime":"2026-02-19T12:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.162883 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.162954 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.162978 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.163007 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.163025 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:14Z","lastTransitionTime":"2026-02-19T12:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.266699 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.266756 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.266775 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.266798 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.266815 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:14Z","lastTransitionTime":"2026-02-19T12:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.273383 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 09:22:52.760469516 +0000 UTC Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.314977 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.315021 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.315241 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:47:14 crc kubenswrapper[4833]: E0219 12:47:14.315233 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.315272 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:47:14 crc kubenswrapper[4833]: E0219 12:47:14.315415 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:47:14 crc kubenswrapper[4833]: E0219 12:47:14.315591 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:47:14 crc kubenswrapper[4833]: E0219 12:47:14.315716 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.370206 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.370805 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.371022 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.371210 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.371394 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:14Z","lastTransitionTime":"2026-02-19T12:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.474850 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.474947 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.474965 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.474992 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.475011 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:14Z","lastTransitionTime":"2026-02-19T12:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.578855 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.578911 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.578959 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.578983 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.579001 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:14Z","lastTransitionTime":"2026-02-19T12:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.681543 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.681613 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.681636 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.681666 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.681689 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:14Z","lastTransitionTime":"2026-02-19T12:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.784550 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.784610 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.784629 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.784653 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.784671 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:14Z","lastTransitionTime":"2026-02-19T12:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.887287 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.887361 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.887385 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.887413 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.887430 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:14Z","lastTransitionTime":"2026-02-19T12:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.990710 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.990791 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.990817 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.990850 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:14 crc kubenswrapper[4833]: I0219 12:47:14.990873 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:14Z","lastTransitionTime":"2026-02-19T12:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.094453 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.094830 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.095022 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.095249 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.095413 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:15Z","lastTransitionTime":"2026-02-19T12:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.198887 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.198943 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.199028 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.199052 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.199070 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:15Z","lastTransitionTime":"2026-02-19T12:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.273722 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 18:57:05.006675147 +0000 UTC Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.306485 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.306588 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.306608 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.306633 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.306650 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:15Z","lastTransitionTime":"2026-02-19T12:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.409837 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.409906 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.409923 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.409947 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.409966 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:15Z","lastTransitionTime":"2026-02-19T12:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.512698 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.512782 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.512800 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.512823 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.512839 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:15Z","lastTransitionTime":"2026-02-19T12:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.616425 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.616582 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.616610 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.616640 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.616664 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:15Z","lastTransitionTime":"2026-02-19T12:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.719785 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.719898 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.719924 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.719949 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.719965 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:15Z","lastTransitionTime":"2026-02-19T12:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.823435 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.823488 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.823533 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.823557 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.823573 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:15Z","lastTransitionTime":"2026-02-19T12:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.926177 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.926482 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.926701 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.926865 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:15 crc kubenswrapper[4833]: I0219 12:47:15.926997 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:15Z","lastTransitionTime":"2026-02-19T12:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.030773 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.030831 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.030848 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.030882 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.030903 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:16Z","lastTransitionTime":"2026-02-19T12:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.134020 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.134085 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.134104 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.134127 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.134147 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:16Z","lastTransitionTime":"2026-02-19T12:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.211037 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.211101 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.211121 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.211145 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.211166 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:16Z","lastTransitionTime":"2026-02-19T12:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:16 crc kubenswrapper[4833]: E0219 12:47:16.232185 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9bc9539f-520c-4440-a07d-375a239a8e0f\\\",\\\"systemUUID\\\":\\\"dc14cf1a-5576-4d69-98fb-0c44d3f24b1f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:16Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.237843 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.238037 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.238180 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.238322 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.238477 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:16Z","lastTransitionTime":"2026-02-19T12:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:16 crc kubenswrapper[4833]: E0219 12:47:16.258765 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9bc9539f-520c-4440-a07d-375a239a8e0f\\\",\\\"systemUUID\\\":\\\"dc14cf1a-5576-4d69-98fb-0c44d3f24b1f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:16Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.263780 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.263834 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.263852 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.263882 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.263902 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:16Z","lastTransitionTime":"2026-02-19T12:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.274818 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 06:18:15.3562709 +0000 UTC Feb 19 12:47:16 crc kubenswrapper[4833]: E0219 12:47:16.283451 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9bc9539f-520c-4440-a07d-375a239a8e0f\\\",\\\"systemUUID\\\":\\\"dc14cf1a-5576-4d69-98fb-0c44d3f24b1f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:16Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.288355 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.288420 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.288437 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.288460 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.288479 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:16Z","lastTransitionTime":"2026-02-19T12:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:16 crc kubenswrapper[4833]: E0219 12:47:16.310245 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9bc9539f-520c-4440-a07d-375a239a8e0f\\\",\\\"systemUUID\\\":\\\"dc14cf1a-5576-4d69-98fb-0c44d3f24b1f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:16Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.314757 4833 util.go:30] "No sandbox for pod can be found. 
Feb 19 12:47:16 crc kubenswrapper[4833]: E0219 12:47:16.314920 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c"
Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.315004 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 12:47:16 crc kubenswrapper[4833]: E0219 12:47:16.315095 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.315232 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 12:47:16 crc kubenswrapper[4833]: E0219 12:47:16.315318 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.315650 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 12:47:16 crc kubenswrapper[4833]: E0219 12:47:16.315960 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.316174 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.316280 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.316304 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.316326 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.316344 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:16Z","lastTransitionTime":"2026-02-19T12:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:16 crc kubenswrapper[4833]: E0219 12:47:16.338836 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9bc9539f-520c-4440-a07d-375a239a8e0f\\\",\\\"systemUUID\\\":\\\"dc14cf1a-5576-4d69-98fb-0c44d3f24b1f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:16Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:16 crc kubenswrapper[4833]: E0219 12:47:16.339247 4833 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.341767 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.341981 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.342196 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.342429 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.342697 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:16Z","lastTransitionTime":"2026-02-19T12:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.446759 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.447197 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.447455 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.447728 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.447970 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:16Z","lastTransitionTime":"2026-02-19T12:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.551744 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.551810 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.551830 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.551858 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.551881 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:16Z","lastTransitionTime":"2026-02-19T12:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.654702 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.654955 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.655098 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.655240 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.655432 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:16Z","lastTransitionTime":"2026-02-19T12:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.759146 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.759213 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.759236 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.759263 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.759288 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:16Z","lastTransitionTime":"2026-02-19T12:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.862179 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.862607 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.863014 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.869484 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.869779 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:16Z","lastTransitionTime":"2026-02-19T12:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.973359 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.973472 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.973535 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.973570 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:16 crc kubenswrapper[4833]: I0219 12:47:16.973591 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:16Z","lastTransitionTime":"2026-02-19T12:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.076101 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.076142 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.076151 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.076166 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.076176 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:17Z","lastTransitionTime":"2026-02-19T12:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.178152 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.178197 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.178207 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.178221 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.178230 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:17Z","lastTransitionTime":"2026-02-19T12:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.276018 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 12:26:57.983478201 +0000 UTC Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.281615 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.281772 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.281893 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.281999 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.282106 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:17Z","lastTransitionTime":"2026-02-19T12:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.385033 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.385197 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.385217 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.385242 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.385262 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:17Z","lastTransitionTime":"2026-02-19T12:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.488910 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.489050 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.489075 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.489108 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.489136 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:17Z","lastTransitionTime":"2026-02-19T12:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.592729 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.592809 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.592829 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.592859 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.592877 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:17Z","lastTransitionTime":"2026-02-19T12:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.696001 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.696065 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.696089 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.696120 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.696140 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:17Z","lastTransitionTime":"2026-02-19T12:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.798284 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.798342 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.798364 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.798390 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.798412 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:17Z","lastTransitionTime":"2026-02-19T12:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.901454 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.901580 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.901643 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.901675 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:17 crc kubenswrapper[4833]: I0219 12:47:17.901693 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:17Z","lastTransitionTime":"2026-02-19T12:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.004413 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.004919 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.005107 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.005321 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.005553 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:18Z","lastTransitionTime":"2026-02-19T12:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.108913 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.108967 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.108985 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.109008 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.109029 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:18Z","lastTransitionTime":"2026-02-19T12:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.211442 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.211804 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.211983 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.212115 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.212234 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:18Z","lastTransitionTime":"2026-02-19T12:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.276898 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 16:23:50.604231467 +0000 UTC Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.313996 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.314058 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:47:18 crc kubenswrapper[4833]: E0219 12:47:18.314160 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.314016 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:47:18 crc kubenswrapper[4833]: E0219 12:47:18.314588 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:47:18 crc kubenswrapper[4833]: E0219 12:47:18.314713 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.314259 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:47:18 crc kubenswrapper[4833]: E0219 12:47:18.314827 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.316291 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.316340 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.316357 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.316378 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.316394 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:18Z","lastTransitionTime":"2026-02-19T12:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.419187 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.419549 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.420097 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.420264 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.420399 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:18Z","lastTransitionTime":"2026-02-19T12:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.523666 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.523767 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.523787 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.523812 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.523829 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:18Z","lastTransitionTime":"2026-02-19T12:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.626538 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.627056 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.627207 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.627351 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.627572 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:18Z","lastTransitionTime":"2026-02-19T12:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.730797 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.731140 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.731275 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.731413 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.731618 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:18Z","lastTransitionTime":"2026-02-19T12:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.835064 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.835127 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.835144 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.835171 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.835194 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:18Z","lastTransitionTime":"2026-02-19T12:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.939997 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.940473 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.940686 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.940858 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:18 crc kubenswrapper[4833]: I0219 12:47:18.941033 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:18Z","lastTransitionTime":"2026-02-19T12:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.044113 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.044190 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.044221 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.044251 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.044273 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:19Z","lastTransitionTime":"2026-02-19T12:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.147458 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.147572 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.147591 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.147614 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.147631 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:19Z","lastTransitionTime":"2026-02-19T12:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.250593 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.251031 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.251059 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.251083 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.251101 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:19Z","lastTransitionTime":"2026-02-19T12:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.277646 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 19:44:10.069271818 +0000 UTC Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.315449 4833 scope.go:117] "RemoveContainer" containerID="0b0716a965e42c5f127a32b1ffc25bdc14d0eb06ee3e09adf04854195b40d252" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.353864 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.353930 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.353951 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.353980 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.354004 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:19Z","lastTransitionTime":"2026-02-19T12:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.456441 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.456487 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.456515 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.456534 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.456547 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:19Z","lastTransitionTime":"2026-02-19T12:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.561759 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.561810 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.561832 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.561860 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.561883 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:19Z","lastTransitionTime":"2026-02-19T12:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.664545 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.664585 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.664597 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.664612 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.664623 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:19Z","lastTransitionTime":"2026-02-19T12:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.769565 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.769617 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.769635 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.769658 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.769677 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:19Z","lastTransitionTime":"2026-02-19T12:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.774350 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwqj9_6dafae6a-984e-4e99-90ca-76937bfcc3d6/ovnkube-controller/1.log" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.778290 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" event={"ID":"6dafae6a-984e-4e99-90ca-76937bfcc3d6","Type":"ContainerStarted","Data":"f97895ecec4af35cb97feaeedbcd59423761376106079a5980af39b81959972c"} Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.778884 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.799130 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:19Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.822577 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ae9deccb64ee85ac4adb206a70f8e11a3143dbb1939e7fc6e06f73639de860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:19Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.854489 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e8adb7f634f05d280c54259846c41c2b25188cec050cad707c2cbff3cb79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/r
ootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:19Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.871906 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.871963 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.871981 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.872016 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.872034 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:19Z","lastTransitionTime":"2026-02-19T12:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.876394 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:19Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.902189 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:19Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.914185 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:19Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.924461 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfd8074-c921-4100-a633-232f33b775b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e82b73c77669166b2181b50da7d3d49dcd5463ce3338237b2bc03f0a207a88f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:19Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.938064 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf14d5-7b12-4a96-b73b-3c8467eda471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc3ae8695e69f07a2d2c47573b6295098b4c09e49549f32625f545aa3abe6cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf8f07c6c232496c55aa498538c41bf18a5eb400cf7fa98145e9f88c129444e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:47:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6g2h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:19Z is after 2025-08-24T17:21:41Z" Feb 19 
12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.950586 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91b543fb-1f33-4706-9ce7-dff08bf7b82f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08afefdd71557d6c17668ed12d83aa416dcb83414ff4c8d741df835d2cdfdf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4ef1418f9464e95e739e3a543b14668c04159065dea6093d086d75a32d919e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3caf63dc183f47734bc7ada20dc729d98465449779a981f21080c0f23ef7e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://990bc2af578ada7fc2630b0d8ff77ab12bc15e0883ae62d8f64598944f6255f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:19Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.969289 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:19Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.974116 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.974175 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.974194 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.974218 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.974236 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:19Z","lastTransitionTime":"2026-02-19T12:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:19 crc kubenswrapper[4833]: I0219 12:47:19.985067 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:19Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.001215 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:19Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.014107 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1b032478ce1fd684af32ce6495ec3256986d402cc0694ef33a31b359412196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:20Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.031608 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:20Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.069596 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97895ecec4af35cb97feaeedbcd594237613761
06079a5980af39b81959972c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b0716a965e42c5f127a32b1ffc25bdc14d0eb06ee3e09adf04854195b40d252\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T12:47:04Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 12:47:03.923957 6221 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:03.924165 6221 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 12:47:03.924232 6221 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 12:47:03.924346 6221 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:03.924417 6221 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:03.924425 6221 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 12:47:03.924516 6221 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 12:47:03.924553 6221 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 12:47:03.924569 6221 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 12:47:03.924590 6221 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 12:47:03.925045 6221 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:47:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:20Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.077538 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.077591 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.077607 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.077634 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.077665 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:20Z","lastTransitionTime":"2026-02-19T12:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.090723 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clgkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4177542e-89ba-436d-bc9d-e792f2da656c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nk85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nk85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:47:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clgkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:20Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.180535 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.180583 4833 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.180600 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.180623 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.180639 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:20Z","lastTransitionTime":"2026-02-19T12:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.278101 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 05:52:05.994590098 +0000 UTC Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.283139 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.283173 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.283185 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.283201 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.283214 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:20Z","lastTransitionTime":"2026-02-19T12:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.314162 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.314292 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.314526 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.314748 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:47:20 crc kubenswrapper[4833]: E0219 12:47:20.314914 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:47:20 crc kubenswrapper[4833]: E0219 12:47:20.314941 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:47:20 crc kubenswrapper[4833]: E0219 12:47:20.314748 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:47:20 crc kubenswrapper[4833]: E0219 12:47:20.315240 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.331887 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfd8074-c921-4100-a633-232f33b775b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e82b73c77669166b2181b50da7d3d49dcd5463ce3338237b2bc03f0a207a88f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7
hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:20Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.348829 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf14d5-7b12-4a96-b73b-3c8467eda471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc3ae8695e69f07a2d2c47573b6295098b4c09e49549f32625f545aa3abe6cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf8f07c6c232496c55aa498538c41bf18a5eb400cf7fa98145e9f88c129444e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube
-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:47:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6g2h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:20Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.371412 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:20Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.385770 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.385936 4833 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.386027 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.386145 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.386241 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:20Z","lastTransitionTime":"2026-02-19T12:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.391491 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\
\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:20Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.412772 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:20Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.431540 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:20Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.453534 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91b543fb-1f33-4706-9ce7-dff08bf7b82f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08afefdd71557d6c17668ed12d83aa416dcb83414ff4c8d741df835d2cdfdf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4ef1418f9464e95e739e3a543b14668c04159065dea6093d086d75a32d919e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3caf63dc183f47734bc7ada20dc729d98465449779a981f21080c0f23ef7e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://990bc2af578ada7fc2630b0d8ff77ab12bc15e0883ae62d8f64598944f6255f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:20Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.466531 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:20Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.488581 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.488639 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.488657 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.488681 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.488699 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:20Z","lastTransitionTime":"2026-02-19T12:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.497849 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97895ecec4af35cb97feaeedbcd59423761376106079a5980af39b81959972c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b0716a965e42c5f127a32b1ffc25bdc14d0eb06ee3e09adf04854195b40d252\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T12:47:04Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 12:47:03.923957 6221 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:03.924165 6221 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 12:47:03.924232 6221 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 12:47:03.924346 6221 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:03.924417 6221 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:03.924425 6221 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 12:47:03.924516 6221 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 12:47:03.924553 6221 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 12:47:03.924569 6221 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 12:47:03.924590 6221 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 12:47:03.925045 6221 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:47:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:20Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.507730 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clgkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4177542e-89ba-436d-bc9d-e792f2da656c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nk85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nk85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:47:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clgkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:20Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.523294 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:20Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.537082 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:20Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.551647 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1b032478ce1fd684af32ce6495ec3256986d402cc0694ef33a31b359412196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:20Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.565786 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e8adb7f634f05d280c54259846c41c2b25188cec050cad707c2cbff3cb79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:20Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.586324 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:20Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.591458 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.591549 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.591577 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.591608 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.591633 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:20Z","lastTransitionTime":"2026-02-19T12:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.602018 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ae9deccb64ee85ac4adb206a70f8e11a3143dbb1939e7fc6e06f73639de860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:20Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.694323 4833 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.694662 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.694673 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.694690 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.694703 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:20Z","lastTransitionTime":"2026-02-19T12:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.784832 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwqj9_6dafae6a-984e-4e99-90ca-76937bfcc3d6/ovnkube-controller/2.log" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.786204 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwqj9_6dafae6a-984e-4e99-90ca-76937bfcc3d6/ovnkube-controller/1.log" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.791008 4833 generic.go:334] "Generic (PLEG): container finished" podID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerID="f97895ecec4af35cb97feaeedbcd59423761376106079a5980af39b81959972c" exitCode=1 Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.791260 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" event={"ID":"6dafae6a-984e-4e99-90ca-76937bfcc3d6","Type":"ContainerDied","Data":"f97895ecec4af35cb97feaeedbcd59423761376106079a5980af39b81959972c"} Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.791471 4833 scope.go:117] "RemoveContainer" containerID="0b0716a965e42c5f127a32b1ffc25bdc14d0eb06ee3e09adf04854195b40d252" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.792766 4833 scope.go:117] "RemoveContainer" containerID="f97895ecec4af35cb97feaeedbcd59423761376106079a5980af39b81959972c" Feb 19 12:47:20 crc kubenswrapper[4833]: E0219 12:47:20.793108 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pwqj9_openshift-ovn-kubernetes(6dafae6a-984e-4e99-90ca-76937bfcc3d6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.797454 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.797693 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.797834 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.797983 4833 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.798115 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:20Z","lastTransitionTime":"2026-02-19T12:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.811733 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfd8074-c921-4100-a633-232f33b775b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e82b73c77669166b2181b50da7d3d49dcd5463ce3338237b2bc03f0a207a88f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:20Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.829892 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf14d5-7b12-4a96-b73b-3c8467eda471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc3ae8695e69f07a2d2c47573b6295098b4c09e49549f32625f545aa3abe6cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf8f07c6c232496c55aa498538c41bf18a5eb400cf7fa98145e9f88c129444e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:47:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6g2h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:20Z is after 2025-08-24T17:21:41Z" Feb 19 
12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.852887 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:20Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.873312 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:20Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.888466 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:20Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.901129 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.901233 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.901250 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.901279 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.901297 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:20Z","lastTransitionTime":"2026-02-19T12:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.904159 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:20Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.923536 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91b543fb-1f33-4706-9ce7-dff08bf7b82f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08afefdd71557d6c17668ed12d83aa416dcb83414ff4c8d741df835d2cdfdf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4ef1418f9464e95e739e3a543b14668c04159065dea6093d086d75a32d919e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3caf63dc183f47734bc7ada20dc729d98465449779a981f21080c0f23ef7e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://990bc2af578ada7fc2630b0d8ff77ab12bc15e0883ae62d8f64598944f6255f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:20Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.940740 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:20Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.971909 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97895ecec4af35cb97feaeedbcd594237613761
06079a5980af39b81959972c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b0716a965e42c5f127a32b1ffc25bdc14d0eb06ee3e09adf04854195b40d252\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T12:47:04Z\\\",\\\"message\\\":\\\"r.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 12:47:03.923957 6221 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:03.924165 6221 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 12:47:03.924232 6221 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 12:47:03.924346 6221 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:03.924417 6221 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:03.924425 6221 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 12:47:03.924516 6221 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 12:47:03.924553 6221 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0219 12:47:03.924569 6221 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 12:47:03.924590 6221 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 12:47:03.925045 6221 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:47:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97895ecec4af35cb97feaeedbcd59423761376106079a5980af39b81959972c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T12:47:20Z\\\",\\\"message\\\":\\\"flector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:20.369449 6439 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 12:47:20.369490 6439 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 12:47:20.369563 6439 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 12:47:20.369568 6439 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 12:47:20.369596 6439 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 12:47:20.369598 6439 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 12:47:20.369619 6439 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 12:47:20.369658 6439 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 12:47:20.369730 6439 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 12:47:20.369750 6439 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 12:47:20.369762 6439 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 12:47:20.369787 6439 factory.go:656] Stopping watch factory\\\\nI0219 12:47:20.369807 6439 ovnkube.go:599] Stopped ovnkube\\\\nI0219 
12:47:20.369845 6439 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 12:47:20.369845 6439 metrics.go:553] Stopping metrics server at address\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6
770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:20Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:20 crc kubenswrapper[4833]: I0219 12:47:20.989101 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clgkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4177542e-89ba-436d-bc9d-e792f2da656c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nk85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nk85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:47:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clgkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:20Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.004250 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.004358 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.004383 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.004407 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.004425 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:21Z","lastTransitionTime":"2026-02-19T12:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.012163 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:21Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.032657 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:21Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.051086 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1b032478ce1fd684af32ce6495ec3256986d402cc0694ef33a31b359412196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:21Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.069288 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e8adb7f634f05d280c54259846c41c2b25188cec050cad707c2cbff3cb79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:21Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.090079 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:21Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.107542 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.107621 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.107644 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.107674 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.107704 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:21Z","lastTransitionTime":"2026-02-19T12:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.114943 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ae9deccb64ee85ac4adb206a70f8e11a3143dbb1939e7fc6e06f73639de860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:21Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.210535 4833 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.210603 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.210621 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.210650 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.210674 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:21Z","lastTransitionTime":"2026-02-19T12:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.279412 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 00:08:21.395488007 +0000 UTC Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.312842 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.313029 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.313122 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.313229 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.313322 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:21Z","lastTransitionTime":"2026-02-19T12:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.416316 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.416392 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.416412 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.416436 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.416453 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:21Z","lastTransitionTime":"2026-02-19T12:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.519393 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.519452 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.519469 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.519491 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.519533 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:21Z","lastTransitionTime":"2026-02-19T12:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.622540 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.622643 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.622662 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.622688 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.622739 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:21Z","lastTransitionTime":"2026-02-19T12:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.726157 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.726613 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.726847 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.726931 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.726951 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:21Z","lastTransitionTime":"2026-02-19T12:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.797590 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwqj9_6dafae6a-984e-4e99-90ca-76937bfcc3d6/ovnkube-controller/2.log" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.803351 4833 scope.go:117] "RemoveContainer" containerID="f97895ecec4af35cb97feaeedbcd59423761376106079a5980af39b81959972c" Feb 19 12:47:21 crc kubenswrapper[4833]: E0219 12:47:21.803558 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pwqj9_openshift-ovn-kubernetes(6dafae6a-984e-4e99-90ca-76937bfcc3d6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.822392 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:21Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.829865 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.829910 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.829925 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.829947 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.829963 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:21Z","lastTransitionTime":"2026-02-19T12:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.845573 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ae9deccb64ee85ac4adb206a70f8e11a3143dbb1939e7fc6e06f73639de860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:21Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.862663 4833 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e8adb7f634f05d280c54259846c41c2b25188cec050cad707c2cbff3cb79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:21Z is after 2025-08-24T17:21:41Z" Feb 19 
12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.883360 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:21Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.903322 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:21Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.920664 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:21Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.933566 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.933919 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.934149 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.934424 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.934880 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:21Z","lastTransitionTime":"2026-02-19T12:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.938423 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfd8074-c921-4100-a633-232f33b775b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e82b73c77669166b2181b50da7d3d49dcd5463ce3338237b2bc03f0a207a88f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:21Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.956033 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf14d5-7b12-4a96-b73b-3c8467eda471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc3ae8695e69f07a2d2c47573b6295098b4c09e49549f32625f545aa3abe6cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf8f07c6c232496c55aa498538c41bf18a5eb400cf7fa98145e9f88c129444e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:47:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6g2h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:21Z is after 2025-08-24T17:21:41Z" Feb 19 
12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.975857 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91b543fb-1f33-4706-9ce7-dff08bf7b82f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08afefdd71557d6c17668ed12d83aa416dcb83414ff4c8d741df835d2cdfdf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4ef1418f9464e95e739e3a543b14668c04159065dea6093d086d75a32d919e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3caf63dc183f47734bc7ada20dc729d98465449779a981f21080c0f23ef7e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://990bc2af578ada7fc2630b0d8ff77ab12bc15e0883ae62d8f64598944f6255f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:21Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:21 crc kubenswrapper[4833]: I0219 12:47:21.995922 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sys
tem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:21Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.018857 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:22Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.044699 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.044758 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.044778 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.044834 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.044857 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:22Z","lastTransitionTime":"2026-02-19T12:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.064918 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:22Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.094351 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1b032478ce1fd684af32ce6495ec3256986d402cc0694ef33a31b359412196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:22Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.109138 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:22Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.130274 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97895ecec4af35cb97feaeedbcd594237613761
06079a5980af39b81959972c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97895ecec4af35cb97feaeedbcd59423761376106079a5980af39b81959972c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T12:47:20Z\\\",\\\"message\\\":\\\"flector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:20.369449 6439 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 12:47:20.369490 6439 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 12:47:20.369563 6439 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 12:47:20.369568 6439 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 12:47:20.369596 6439 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 12:47:20.369598 6439 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 12:47:20.369619 6439 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 12:47:20.369658 6439 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 12:47:20.369730 6439 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 12:47:20.369750 6439 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 12:47:20.369762 6439 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 12:47:20.369787 6439 factory.go:656] Stopping watch factory\\\\nI0219 12:47:20.369807 6439 ovnkube.go:599] Stopped ovnkube\\\\nI0219 12:47:20.369845 6439 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 12:47:20.369845 6439 metrics.go:553] Stopping metrics server at address\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:47:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwqj9_openshift-ovn-kubernetes(6dafae6a-984e-4e99-90ca-76937bfcc3d6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:22Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.142547 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clgkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4177542e-89ba-436d-bc9d-e792f2da656c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nk85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nk85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:47:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clgkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:22Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.146982 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.147044 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.147277 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.147320 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.147332 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:22Z","lastTransitionTime":"2026-02-19T12:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.150732 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4177542e-89ba-436d-bc9d-e792f2da656c-metrics-certs\") pod \"network-metrics-daemon-clgkm\" (UID: \"4177542e-89ba-436d-bc9d-e792f2da656c\") " pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:47:22 crc kubenswrapper[4833]: E0219 12:47:22.150929 4833 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 12:47:22 crc kubenswrapper[4833]: E0219 12:47:22.151018 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4177542e-89ba-436d-bc9d-e792f2da656c-metrics-certs podName:4177542e-89ba-436d-bc9d-e792f2da656c nodeName:}" failed. No retries permitted until 2026-02-19 12:47:38.150992245 +0000 UTC m=+68.546511093 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4177542e-89ba-436d-bc9d-e792f2da656c-metrics-certs") pod "network-metrics-daemon-clgkm" (UID: "4177542e-89ba-436d-bc9d-e792f2da656c") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.257591 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.257715 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.257781 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.257825 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.257866 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:47:22 crc kubenswrapper[4833]: E0219 12:47:22.258001 4833 projected.go:288] Couldn't get 
configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 12:47:22 crc kubenswrapper[4833]: E0219 12:47:22.258026 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 12:47:22 crc kubenswrapper[4833]: E0219 12:47:22.258042 4833 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 12:47:22 crc kubenswrapper[4833]: E0219 12:47:22.258101 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 12:47:54.258081655 +0000 UTC m=+84.653600433 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 12:47:22 crc kubenswrapper[4833]: E0219 12:47:22.258125 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:47:54.258114515 +0000 UTC m=+84.653633293 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:47:22 crc kubenswrapper[4833]: E0219 12:47:22.257999 4833 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 12:47:22 crc kubenswrapper[4833]: E0219 12:47:22.258163 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 12:47:54.258153776 +0000 UTC m=+84.653672554 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 12:47:22 crc kubenswrapper[4833]: E0219 12:47:22.258208 4833 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 12:47:22 crc kubenswrapper[4833]: E0219 12:47:22.258291 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 12:47:54.258266139 +0000 UTC m=+84.653784957 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 12:47:22 crc kubenswrapper[4833]: E0219 12:47:22.258416 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 12:47:22 crc kubenswrapper[4833]: E0219 12:47:22.258442 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 12:47:22 crc kubenswrapper[4833]: E0219 12:47:22.258460 4833 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 12:47:22 crc kubenswrapper[4833]: E0219 12:47:22.258556 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 12:47:54.258492215 +0000 UTC m=+84.654011023 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.259916 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.259994 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.260012 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.260037 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.260054 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:22Z","lastTransitionTime":"2026-02-19T12:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.280290 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 09:47:15.003588684 +0000 UTC Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.314815 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.314875 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.314895 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.314825 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:47:22 crc kubenswrapper[4833]: E0219 12:47:22.315045 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:47:22 crc kubenswrapper[4833]: E0219 12:47:22.315203 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:47:22 crc kubenswrapper[4833]: E0219 12:47:22.315370 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:47:22 crc kubenswrapper[4833]: E0219 12:47:22.315476 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.363061 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.363143 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.363169 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.363201 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.363221 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:22Z","lastTransitionTime":"2026-02-19T12:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.466821 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.466902 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.466920 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.466945 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.466965 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:22Z","lastTransitionTime":"2026-02-19T12:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.542117 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.562206 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.566611 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf14d5-7b12-4a96-b73b-3c8467eda471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc3ae8695e69f07a2d2c47573b6295098b4c09e49549f32625f545aa3abe6cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf8f07c6c232496c55aa498538c41bf18a5eb400cf7fa98145e9f88c129444e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\
",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:47:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6g2h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:22Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.570256 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.570316 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.570336 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.570364 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.570384 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:22Z","lastTransitionTime":"2026-02-19T12:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.589249 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:22Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.610778 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:22Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.628582 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:22Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.647752 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfd8074-c921-4100-a633-232f33b775b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e82b73c77669166b2181b50da7d3d49dcd5463ce3338237b2bc03f0a207a88f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:22Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.672380 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91b543fb-1f33-4706-9ce7-dff08bf7b82f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08afefdd71557d6c17668ed12d83aa416dcb83414ff4c8d741df835d2cdfdf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4ef1418f9464e95e739e3a543b14668c04159065dea6093d086d75a32d919e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3caf63dc183f47734bc7ada20dc729d98465449779a981f21080c0f23ef7e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://990bc2af578ada7fc2630b0d8ff77ab12bc15e0883ae62d8f64598944f6255f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:22Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.673468 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.673538 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.673550 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.673569 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.673581 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:22Z","lastTransitionTime":"2026-02-19T12:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.694488 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:22Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.728552 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97895ecec4af35cb97feaeedbcd59423761376106079a5980af39b81959972c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97895ecec4af35cb97feaeedbcd59423761376106079a5980af39b81959972c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T12:47:20Z\\\",\\\"message\\\":\\\"flector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:20.369449 6439 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 12:47:20.369490 6439 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 12:47:20.369563 6439 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 12:47:20.369568 6439 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 12:47:20.369596 6439 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 12:47:20.369598 6439 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 12:47:20.369619 6439 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 12:47:20.369658 6439 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 12:47:20.369730 6439 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 12:47:20.369750 6439 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 12:47:20.369762 6439 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 12:47:20.369787 6439 factory.go:656] Stopping watch factory\\\\nI0219 12:47:20.369807 6439 ovnkube.go:599] Stopped ovnkube\\\\nI0219 12:47:20.369845 6439 handler.go:208] Removed *v1.EgressFirewall event 
handler 9\\\\nI0219 12:47:20.369845 6439 metrics.go:553] Stopping metrics server at address\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:47:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pwqj9_openshift-ovn-kubernetes(6dafae6a-984e-4e99-90ca-76937bfcc3d6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceac
count\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:22Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.745582 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clgkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4177542e-89ba-436d-bc9d-e792f2da656c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nk85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nk85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:47:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clgkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:22Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.769753 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:22Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.776155 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.776216 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.776234 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.776260 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.776280 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:22Z","lastTransitionTime":"2026-02-19T12:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.789463 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:22Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.808542 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1b032478ce1fd684af32ce6495ec3256986d402cc0694ef33a31b359412196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:22Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.828214 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:22Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.848230 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:22Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.874955 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ae9deccb64ee85ac4adb206a70f8e11a3143dbb1939e7fc6e06f73639de860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:22Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.880059 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.880131 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.880156 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.880187 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.880209 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:22Z","lastTransitionTime":"2026-02-19T12:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.892955 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e8adb7f634f05d280c54259846c41c2b25188cec050cad707c2cbff3cb79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:22Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.983668 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.983711 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.983726 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.983745 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:22 crc kubenswrapper[4833]: I0219 12:47:22.983757 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:22Z","lastTransitionTime":"2026-02-19T12:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.087606 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.087678 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.087696 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.087721 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.087739 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:23Z","lastTransitionTime":"2026-02-19T12:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.190633 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.190697 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.190716 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.190739 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.190757 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:23Z","lastTransitionTime":"2026-02-19T12:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.281144 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 13:07:26.153870956 +0000 UTC Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.293820 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.293872 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.293889 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.293913 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.293930 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:23Z","lastTransitionTime":"2026-02-19T12:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.397779 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.397845 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.397864 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.397890 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.397907 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:23Z","lastTransitionTime":"2026-02-19T12:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.500995 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.501042 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.501053 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.501070 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.501083 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:23Z","lastTransitionTime":"2026-02-19T12:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.604241 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.604308 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.604325 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.604350 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.604370 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:23Z","lastTransitionTime":"2026-02-19T12:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.707453 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.707526 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.707544 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.707569 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.707585 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:23Z","lastTransitionTime":"2026-02-19T12:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.810006 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.810071 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.810093 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.810122 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.810140 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:23Z","lastTransitionTime":"2026-02-19T12:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.912681 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.912731 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.912748 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.912773 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:23 crc kubenswrapper[4833]: I0219 12:47:23.912790 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:23Z","lastTransitionTime":"2026-02-19T12:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.015722 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.015787 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.015810 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.015836 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.015857 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:24Z","lastTransitionTime":"2026-02-19T12:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.118630 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.118688 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.118708 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.118731 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.118749 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:24Z","lastTransitionTime":"2026-02-19T12:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.221869 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.221953 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.221973 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.222006 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.222029 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:24Z","lastTransitionTime":"2026-02-19T12:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.282356 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 00:44:17.787469528 +0000 UTC Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.314862 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.314900 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.314943 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:47:24 crc kubenswrapper[4833]: E0219 12:47:24.315105 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.315186 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:47:24 crc kubenswrapper[4833]: E0219 12:47:24.315432 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:47:24 crc kubenswrapper[4833]: E0219 12:47:24.315584 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:47:24 crc kubenswrapper[4833]: E0219 12:47:24.315684 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.326321 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.326408 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.326530 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.326571 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.326609 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:24Z","lastTransitionTime":"2026-02-19T12:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.429848 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.429911 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.429926 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.429958 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.429975 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:24Z","lastTransitionTime":"2026-02-19T12:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.533622 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.533679 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.533696 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.533723 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.533741 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:24Z","lastTransitionTime":"2026-02-19T12:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.637068 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.637715 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.638243 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.638698 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.639112 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:24Z","lastTransitionTime":"2026-02-19T12:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.743438 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.743523 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.743541 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.743565 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.743581 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:24Z","lastTransitionTime":"2026-02-19T12:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.846810 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.847393 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.847584 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.847828 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.848029 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:24Z","lastTransitionTime":"2026-02-19T12:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.951450 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.951894 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.951980 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.952097 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:24 crc kubenswrapper[4833]: I0219 12:47:24.952168 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:24Z","lastTransitionTime":"2026-02-19T12:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.056558 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.057035 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.057194 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.057395 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.057601 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:25Z","lastTransitionTime":"2026-02-19T12:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.160640 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.160682 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.160692 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.160710 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.160724 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:25Z","lastTransitionTime":"2026-02-19T12:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.264943 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.265004 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.265022 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.265045 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.265064 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:25Z","lastTransitionTime":"2026-02-19T12:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.282590 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 07:06:30.356764016 +0000 UTC Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.368633 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.368680 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.368690 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.368710 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.368724 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:25Z","lastTransitionTime":"2026-02-19T12:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.472177 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.472276 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.472298 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.472328 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.472350 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:25Z","lastTransitionTime":"2026-02-19T12:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.576033 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.576105 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.576128 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.576159 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.576184 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:25Z","lastTransitionTime":"2026-02-19T12:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.679073 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.679162 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.679187 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.679216 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.679233 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:25Z","lastTransitionTime":"2026-02-19T12:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.782877 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.782956 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.782980 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.783008 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.783031 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:25Z","lastTransitionTime":"2026-02-19T12:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.886033 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.886092 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.886110 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.886134 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.886152 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:25Z","lastTransitionTime":"2026-02-19T12:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.989106 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.989166 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.989185 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.989217 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:25 crc kubenswrapper[4833]: I0219 12:47:25.989252 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:25Z","lastTransitionTime":"2026-02-19T12:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:26 crc kubenswrapper[4833]: I0219 12:47:26.092222 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:26 crc kubenswrapper[4833]: I0219 12:47:26.092287 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:26 crc kubenswrapper[4833]: I0219 12:47:26.092309 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:26 crc kubenswrapper[4833]: I0219 12:47:26.092337 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:26 crc kubenswrapper[4833]: I0219 12:47:26.092357 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:26Z","lastTransitionTime":"2026-02-19T12:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 19 12:47:26 crc kubenswrapper[4833]: I0219 12:47:26.282841 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 04:34:46.196967918 +0000 UTC
Feb 19 12:47:26 crc kubenswrapper[4833]: I0219 12:47:26.314099 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 12:47:26 crc kubenswrapper[4833]: I0219 12:47:26.314183 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm"
Feb 19 12:47:26 crc kubenswrapper[4833]: I0219 12:47:26.314194 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 12:47:26 crc kubenswrapper[4833]: E0219 12:47:26.314297 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 12:47:26 crc kubenswrapper[4833]: I0219 12:47:26.314317 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 12:47:26 crc kubenswrapper[4833]: E0219 12:47:26.314401 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c"
Feb 19 12:47:26 crc kubenswrapper[4833]: E0219 12:47:26.314477 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 12:47:26 crc kubenswrapper[4833]: E0219 12:47:26.314666 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 12:47:26 crc kubenswrapper[4833]: E0219 12:47:26.395039 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9bc9539f-520c-4440-a07d-375a239a8e0f\\\",\\\"systemUUID\\\":\\\"dc14cf1a-5576-4d69-98fb-0c44d3f24b1f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:26Z is after 2025-08-24T17:21:41Z"
Feb 19 12:47:26 crc kubenswrapper[4833]: E0219 12:47:26.508998 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9bc9539f-520c-4440-a07d-375a239a8e0f\\\",\\\"systemUUID\\\":\\\"dc14cf1a-5576-4d69-98fb-0c44d3f24b1f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:26Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:26 crc kubenswrapper[4833]: E0219 12:47:26.509222 4833 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 12:47:26 crc kubenswrapper[4833]: I0219 12:47:26.511731 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 19 12:47:26 crc kubenswrapper[4833]: I0219 12:47:26.511814 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:26 crc kubenswrapper[4833]: I0219 12:47:26.511870 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:26 crc kubenswrapper[4833]: I0219 12:47:26.511904 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:26 crc kubenswrapper[4833]: I0219 12:47:26.511929 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:26Z","lastTransitionTime":"2026-02-19T12:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:26 crc kubenswrapper[4833]: I0219 12:47:26.615069 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:26 crc kubenswrapper[4833]: I0219 12:47:26.615128 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:26 crc kubenswrapper[4833]: I0219 12:47:26.615145 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:26 crc kubenswrapper[4833]: I0219 12:47:26.615171 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:26 crc kubenswrapper[4833]: I0219 12:47:26.615191 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:26Z","lastTransitionTime":"2026-02-19T12:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:26 crc kubenswrapper[4833]: I0219 12:47:26.718639 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:26 crc kubenswrapper[4833]: I0219 12:47:26.718704 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:26 crc kubenswrapper[4833]: I0219 12:47:26.718725 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:26 crc kubenswrapper[4833]: I0219 12:47:26.718750 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:26 crc kubenswrapper[4833]: I0219 12:47:26.718768 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:26Z","lastTransitionTime":"2026-02-19T12:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:26 crc kubenswrapper[4833]: I0219 12:47:26.822339 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:26 crc kubenswrapper[4833]: I0219 12:47:26.822410 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:26 crc kubenswrapper[4833]: I0219 12:47:26.822431 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:26 crc kubenswrapper[4833]: I0219 12:47:26.822459 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:26 crc kubenswrapper[4833]: I0219 12:47:26.822476 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:26Z","lastTransitionTime":"2026-02-19T12:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:26 crc kubenswrapper[4833]: I0219 12:47:26.926134 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:26 crc kubenswrapper[4833]: I0219 12:47:26.926183 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:26 crc kubenswrapper[4833]: I0219 12:47:26.926200 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:26 crc kubenswrapper[4833]: I0219 12:47:26.926225 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:26 crc kubenswrapper[4833]: I0219 12:47:26.926243 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:26Z","lastTransitionTime":"2026-02-19T12:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.029211 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.029271 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.029291 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.029315 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.029333 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:27Z","lastTransitionTime":"2026-02-19T12:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.132188 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.132241 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.132251 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.132274 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.132284 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:27Z","lastTransitionTime":"2026-02-19T12:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.235733 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.235839 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.235858 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.235883 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.235952 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:27Z","lastTransitionTime":"2026-02-19T12:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.283821 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 17:56:45.430232738 +0000 UTC Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.339470 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.339560 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.339579 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.339601 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.339618 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:27Z","lastTransitionTime":"2026-02-19T12:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.442637 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.442704 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.442724 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.442747 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.442764 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:27Z","lastTransitionTime":"2026-02-19T12:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.545552 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.545621 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.545639 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.545719 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.545737 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:27Z","lastTransitionTime":"2026-02-19T12:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.648769 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.648826 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.648843 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.648866 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.648885 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:27Z","lastTransitionTime":"2026-02-19T12:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.752278 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.752343 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.752366 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.752394 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.752413 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:27Z","lastTransitionTime":"2026-02-19T12:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.855630 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.855708 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.855729 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.855757 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.855773 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:27Z","lastTransitionTime":"2026-02-19T12:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.958567 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.958641 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.958657 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.958682 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:27 crc kubenswrapper[4833]: I0219 12:47:27.958699 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:27Z","lastTransitionTime":"2026-02-19T12:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.062171 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.062253 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.062271 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.062298 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.062318 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:28Z","lastTransitionTime":"2026-02-19T12:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.165147 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.165219 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.165252 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.165281 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.165305 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:28Z","lastTransitionTime":"2026-02-19T12:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.268364 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.268418 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.268435 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.268457 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.268478 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:28Z","lastTransitionTime":"2026-02-19T12:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.284021 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 23:49:17.139213015 +0000 UTC Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.314729 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.314831 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.314903 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:47:28 crc kubenswrapper[4833]: E0219 12:47:28.315086 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.315118 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:47:28 crc kubenswrapper[4833]: E0219 12:47:28.315247 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:47:28 crc kubenswrapper[4833]: E0219 12:47:28.315384 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:47:28 crc kubenswrapper[4833]: E0219 12:47:28.315467 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.371292 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.371351 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.371372 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.371400 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.371419 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:28Z","lastTransitionTime":"2026-02-19T12:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.473666 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.473731 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.473754 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.473784 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.473804 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:28Z","lastTransitionTime":"2026-02-19T12:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.576656 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.576724 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.576741 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.576764 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.576781 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:28Z","lastTransitionTime":"2026-02-19T12:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.679319 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.679390 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.679416 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.679447 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.679472 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:28Z","lastTransitionTime":"2026-02-19T12:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.782448 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.782488 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.782513 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.782541 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.782549 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:28Z","lastTransitionTime":"2026-02-19T12:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.885168 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.885284 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.885357 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.885384 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.885452 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:28Z","lastTransitionTime":"2026-02-19T12:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.988703 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.988770 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.988788 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.988816 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:28 crc kubenswrapper[4833]: I0219 12:47:28.988833 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:28Z","lastTransitionTime":"2026-02-19T12:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.091803 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.091935 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.091953 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.091976 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.091993 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:29Z","lastTransitionTime":"2026-02-19T12:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.194400 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.194449 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.194466 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.194488 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.194553 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:29Z","lastTransitionTime":"2026-02-19T12:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.284155 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 20:08:39.318888082 +0000 UTC Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.297792 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.297890 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.297917 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.297950 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.297974 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:29Z","lastTransitionTime":"2026-02-19T12:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.401354 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.401448 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.401470 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.401528 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.401549 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:29Z","lastTransitionTime":"2026-02-19T12:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.504955 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.505013 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.505030 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.505054 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.505086 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:29Z","lastTransitionTime":"2026-02-19T12:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.608131 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.608196 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.608219 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.608250 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.608293 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:29Z","lastTransitionTime":"2026-02-19T12:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.711912 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.711978 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.711998 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.712022 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.712038 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:29Z","lastTransitionTime":"2026-02-19T12:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.853527 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.853572 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.853583 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.853605 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.853616 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:29Z","lastTransitionTime":"2026-02-19T12:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.956696 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.956752 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.956774 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.956805 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:29 crc kubenswrapper[4833]: I0219 12:47:29.956825 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:29Z","lastTransitionTime":"2026-02-19T12:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.060020 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.060117 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.060166 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.060191 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.060208 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:30Z","lastTransitionTime":"2026-02-19T12:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.163255 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.163319 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.163335 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.163362 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.163380 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:30Z","lastTransitionTime":"2026-02-19T12:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.265985 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.266053 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.266077 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.266108 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.266132 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:30Z","lastTransitionTime":"2026-02-19T12:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.284539 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 10:05:55.078383264 +0000 UTC
Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.314184 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm"
Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.314281 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.314197 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.314200 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 12:47:30 crc kubenswrapper[4833]: E0219 12:47:30.314418 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c"
Feb 19 12:47:30 crc kubenswrapper[4833]: E0219 12:47:30.314568 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 12:47:30 crc kubenswrapper[4833]: E0219 12:47:30.314722 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 12:47:30 crc kubenswrapper[4833]: E0219 12:47:30.314842 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.351728 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97895ecec4af35cb97feaeedbcd59423761376106079a5980af39b81959972c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97895ecec4af35cb97feaeedbcd59423761376106079a5980af39b81959972c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T12:47:20Z\\\",\\\"message\\\":\\\"flector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:20.369449 6439 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 12:47:20.369490 6439 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 12:47:20.369563 6439 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 12:47:20.369568 6439 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 12:47:20.369596 6439 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 12:47:20.369598 6439 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 12:47:20.369619 6439 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 12:47:20.369658 6439 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 12:47:20.369730 6439 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 12:47:20.369750 6439 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 12:47:20.369762 6439 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 12:47:20.369787 6439 factory.go:656] Stopping watch factory\\\\nI0219 12:47:20.369807 6439 ovnkube.go:599] Stopped ovnkube\\\\nI0219 12:47:20.369845 6439 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 12:47:20.369845 6439 metrics.go:553] Stopping metrics server at address\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:47:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting 
failed container=ovnkube-controller pod=ovnkube-node-pwqj9_openshift-ovn-kubernetes(6dafae6a-984e-4e99-90ca-76937bfcc3d6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:30Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.369584 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.369724 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.369745 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.369770 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.369835 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:30Z","lastTransitionTime":"2026-02-19T12:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.372279 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clgkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4177542e-89ba-436d-bc9d-e792f2da656c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nk85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nk85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:47:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clgkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:30Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.395269 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:30Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.416959 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:30Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.441660 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1b032478ce1fd684af32ce6495ec3256986d402cc0694ef33a31b359412196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:30Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.459680 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:30Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.473295 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.473347 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.473366 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.473391 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.473410 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:30Z","lastTransitionTime":"2026-02-19T12:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.480054 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:30Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.505412 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ae9deccb64ee85ac4adb206a70f8e11a3143dbb1939e7fc6e06f73639de860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:30Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.523777 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e8adb7f634f05d280c54259846c41c2b25188cec050cad707c2cbff3cb79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:30Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.543320 4833 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf14d5-7b12-4a96-b73b-3c8467eda471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc3ae8695e69f07a2d2c47573b6295098b4c09e49549f32625f545aa3abe6cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf8f07c6c232496c55aa498538c41bf18a5eb400cf7fa98145e9f88c129444e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:47:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6g2h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:30Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.563514 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:30Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.577186 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.577273 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.577297 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.577331 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.577354 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:30Z","lastTransitionTime":"2026-02-19T12:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.583032 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:30Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.598378 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:30Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.612986 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfd8074-c921-4100-a633-232f33b775b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e82b73c77669166b2181b50da7d3d49dcd5463ce3338237b2bc03f0a207a88f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:30Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.632384 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91b543fb-1f33-4706-9ce7-dff08bf7b82f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08afefdd71557d6c17668ed12d83aa416dcb83414ff4c8d741df835d2cdfdf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4ef1418f9464e95e739e3a543b14668c04159065dea6093d086d75a32d919e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3caf63dc183f47734bc7ada20dc729d98465449779a981f21080c0f23ef7e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://990bc2af578ada7fc2630b0d8ff77ab12bc15e0883ae62d8f64598944f6255f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:30Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.650642 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dfd21a-8b34-4a6f-8b53-f220425fb369\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fe0c469c1d92b8cc814628a16596c31a9dae61fdc7820423d94a8a25c622c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6facdd3120ee01990d31602f404ba1bfdd78ebfe3de3e0208a1f1058bede3472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef38f10daeecfe4798979b037586cb7553da6ed81347ba4a1c2f1fa6671e269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7691cd25733ea26644ce2b86eb92587cf9c0545fbf32d8a39203f8ef305709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7691cd25733ea26644ce2b86eb92587cf9c0545fbf32d8a39203f8ef305709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:30Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.672430 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:30Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.680462 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.680564 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.680585 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.680609 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.680628 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:30Z","lastTransitionTime":"2026-02-19T12:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.783169 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.783230 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.783245 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.783265 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.783281 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:30Z","lastTransitionTime":"2026-02-19T12:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.886088 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.886141 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.886152 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.886169 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.886182 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:30Z","lastTransitionTime":"2026-02-19T12:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.988718 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.988804 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.988825 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.988851 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:30 crc kubenswrapper[4833]: I0219 12:47:30.988871 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:30Z","lastTransitionTime":"2026-02-19T12:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.092168 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.092226 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.092242 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.092265 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.092283 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:31Z","lastTransitionTime":"2026-02-19T12:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.194249 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.194340 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.194365 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.194398 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.194421 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:31Z","lastTransitionTime":"2026-02-19T12:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.285302 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 04:45:42.456954765 +0000 UTC Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.298447 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.298523 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.298542 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.298565 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.298582 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:31Z","lastTransitionTime":"2026-02-19T12:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.402017 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.402080 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.402096 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.402120 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.402138 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:31Z","lastTransitionTime":"2026-02-19T12:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.504948 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.505033 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.505055 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.505082 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.505103 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:31Z","lastTransitionTime":"2026-02-19T12:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.609231 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.609284 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.609300 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.609324 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.609341 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:31Z","lastTransitionTime":"2026-02-19T12:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.712066 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.712134 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.712159 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.712190 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.712212 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:31Z","lastTransitionTime":"2026-02-19T12:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.815858 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.815905 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.815923 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.815947 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.815963 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:31Z","lastTransitionTime":"2026-02-19T12:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.918472 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.918572 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.918592 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.918615 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:31 crc kubenswrapper[4833]: I0219 12:47:31.918631 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:31Z","lastTransitionTime":"2026-02-19T12:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.021535 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.021604 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.021622 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.021647 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.021665 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:32Z","lastTransitionTime":"2026-02-19T12:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.125704 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.125762 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.125781 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.125804 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.125822 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:32Z","lastTransitionTime":"2026-02-19T12:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.229878 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.229956 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.229979 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.230025 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.230048 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:32Z","lastTransitionTime":"2026-02-19T12:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.285753 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 11:23:32.021721933 +0000 UTC Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.314421 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:47:32 crc kubenswrapper[4833]: E0219 12:47:32.314663 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.314712 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:47:32 crc kubenswrapper[4833]: E0219 12:47:32.314848 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.314943 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:47:32 crc kubenswrapper[4833]: E0219 12:47:32.315028 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.315233 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:47:32 crc kubenswrapper[4833]: E0219 12:47:32.315338 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.332949 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.332985 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.333000 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.333018 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.333032 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:32Z","lastTransitionTime":"2026-02-19T12:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.435288 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.435358 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.435376 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.435405 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.435427 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:32Z","lastTransitionTime":"2026-02-19T12:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.537552 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.537597 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.537612 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.537632 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.537646 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:32Z","lastTransitionTime":"2026-02-19T12:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.648357 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.648470 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.648530 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.648566 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.648604 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:32Z","lastTransitionTime":"2026-02-19T12:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.752768 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.752832 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.752842 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.752862 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.752875 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:32Z","lastTransitionTime":"2026-02-19T12:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.856314 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.856371 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.856383 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.856404 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.856420 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:32Z","lastTransitionTime":"2026-02-19T12:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.959386 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.959445 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.959463 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.959487 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:32 crc kubenswrapper[4833]: I0219 12:47:32.959545 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:32Z","lastTransitionTime":"2026-02-19T12:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.062373 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.062422 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.062438 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.062460 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.062479 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:33Z","lastTransitionTime":"2026-02-19T12:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.166615 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.166714 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.166741 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.167220 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.167765 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:33Z","lastTransitionTime":"2026-02-19T12:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.271771 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.271902 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.271920 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.271945 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.271997 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:33Z","lastTransitionTime":"2026-02-19T12:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.286199 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 10:27:16.105932541 +0000 UTC
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.374412 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.374467 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.374484 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.374531 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.374552 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:33Z","lastTransitionTime":"2026-02-19T12:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.477103 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.477176 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.477202 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.477228 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.477245 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:33Z","lastTransitionTime":"2026-02-19T12:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
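The certificate_manager line above shows the kubelet-serving certificate manager recomputing its rotation deadline. The deadline differs on every pass (compare 2025-11-13 here with 2025-11-27 and later values below) because client-go jitters it, picking a uniformly random point at roughly 70-90% of the certificate's lifetime each time it evaluates. A sketch of that computation under that assumption; the 70-90% window and the one-year issue date are assumptions, not values from this log:

    import random
    from datetime import datetime, timedelta

    def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
        """Jittered rotation deadline, assumed to mirror client-go's
        certificate manager: a uniform point at 70-90% of the lifetime."""
        return not_before + (not_after - not_before) * random.uniform(0.7, 0.9)

    # Expiry taken from the log entry; the issue date is an assumption.
    expiry = datetime(2026, 2, 24, 5, 53, 3)
    issued = expiry - timedelta(days=365)
    print(rotation_deadline(issued, expiry))  # lands in a different spot each run

All of the computed deadlines fall before the log's own timestamps, so rotation is already due; the once-per-second reappearance of this line is consistent with the manager retrying, and re-jittering, while the not-ready node cannot complete a signing request.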
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.579791 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.579840 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.579851 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.579867 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.579878 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:33Z","lastTransitionTime":"2026-02-19T12:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.682667 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.682717 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.682728 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.682746 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.682758 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:33Z","lastTransitionTime":"2026-02-19T12:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.785570 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.785622 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.785637 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.785656 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.785669 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:33Z","lastTransitionTime":"2026-02-19T12:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.888820 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.888861 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.888875 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.888894 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.888906 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:33Z","lastTransitionTime":"2026-02-19T12:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.990808 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.990849 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.990858 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.990872 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:33 crc kubenswrapper[4833]: I0219 12:47:33.990882 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:33Z","lastTransitionTime":"2026-02-19T12:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.093581 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.093643 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.093660 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.093688 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.093706 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:34Z","lastTransitionTime":"2026-02-19T12:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.196228 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.196261 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.196269 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.196283 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.196293 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:34Z","lastTransitionTime":"2026-02-19T12:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.287379 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 23:17:50.689845209 +0000 UTC
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.299330 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.299376 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.299391 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.299409 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.299422 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:34Z","lastTransitionTime":"2026-02-19T12:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.314828 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.314844 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.314880 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 12:47:34 crc kubenswrapper[4833]: E0219 12:47:34.314935 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.314837 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm"
Feb 19 12:47:34 crc kubenswrapper[4833]: E0219 12:47:34.315081 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 12:47:34 crc kubenswrapper[4833]: E0219 12:47:34.315171 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c"
Feb 19 12:47:34 crc kubenswrapper[4833]: E0219 12:47:34.315802 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.315883 4833 scope.go:117] "RemoveContainer" containerID="f97895ecec4af35cb97feaeedbcd59423761376106079a5980af39b81959972c"
Feb 19 12:47:34 crc kubenswrapper[4833]: E0219 12:47:34.316059 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pwqj9_openshift-ovn-kubernetes(6dafae6a-984e-4e99-90ca-76937bfcc3d6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.402460 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.402546 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.402574 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.402605 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.402623 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:34Z","lastTransitionTime":"2026-02-19T12:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.505387 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.505441 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.505460 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.505484 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.505528 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:34Z","lastTransitionTime":"2026-02-19T12:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.608614 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.608675 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.608685 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.608701 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.608712 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:34Z","lastTransitionTime":"2026-02-19T12:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.711123 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.711233 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.711257 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.711288 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.711306 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:34Z","lastTransitionTime":"2026-02-19T12:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.813781 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.813818 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.813829 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.813844 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.813854 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:34Z","lastTransitionTime":"2026-02-19T12:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.915805 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.915868 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.915877 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.915892 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:34 crc kubenswrapper[4833]: I0219 12:47:34.915902 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:34Z","lastTransitionTime":"2026-02-19T12:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.018255 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.018326 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.018340 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.018358 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.018370 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:35Z","lastTransitionTime":"2026-02-19T12:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.120073 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.120107 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.120117 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.120132 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.120159 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:35Z","lastTransitionTime":"2026-02-19T12:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.222369 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.222405 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.222417 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.222432 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.222443 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:35Z","lastTransitionTime":"2026-02-19T12:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.287798 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 13:35:46.298337675 +0000 UTC
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.324866 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.324919 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.324937 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.324959 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.324976 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:35Z","lastTransitionTime":"2026-02-19T12:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.427282 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.427335 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.427356 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.427381 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.427400 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:35Z","lastTransitionTime":"2026-02-19T12:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.530342 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.530427 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.530452 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.530484 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.530553 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:35Z","lastTransitionTime":"2026-02-19T12:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.633485 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.633541 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.633550 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.633564 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.633576 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:35Z","lastTransitionTime":"2026-02-19T12:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.737003 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.737044 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.737052 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.737066 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.737075 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:35Z","lastTransitionTime":"2026-02-19T12:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.840524 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.840588 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.840605 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.840627 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.840643 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:35Z","lastTransitionTime":"2026-02-19T12:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.943916 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.943957 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.943967 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.943982 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.943993 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:35Z","lastTransitionTime":"2026-02-19T12:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
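From here to the end of the capture, the same five-line status block recurs roughly every 100 ms, which makes the raw journal hard to scan. A small reading aid for logs in this shape, which collapses runs of identical messages after stripping the journald/klog prefix (an editor's convenience sketch; the prefix regex assumes exactly the format shown in this capture):

    import re
    import sys

    # Strip the journald + klog prefix seen in this capture, e.g.
    # "Feb 19 12:47:35 crc kubenswrapper[4833]: I0219 12:47:35.737003 4833 "
    PREFIX = re.compile(
        r"^\w{3} +\d+ \d{2}:\d{2}:\d{2} \S+ \S+\[\d+\]: [IWE]\d{4} \S+ \d+ "
    )

    def collapse(lines):
        """Yield '[xN] message' for each run of identical messages."""
        last, count = None, 0
        for line in lines:
            body = PREFIX.sub("", line.rstrip("\n"))
            if body == last:
                count += 1
                continue
            if last is not None:
                yield f"[x{count}] {last}"
            last, count = body, 1
        if last is not None:
            yield f"[x{count}] {last}"

    if __name__ == "__main__":
        for summary in collapse(sys.stdin):
            print(summary)

Because the "Node became not ready" lines embed their own heartbeat timestamps, they only collapse within a given second; stripping timestamps from the body before comparing would tighten the summary further.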
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.046651 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.046717 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.046731 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.046757 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.046776 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:36Z","lastTransitionTime":"2026-02-19T12:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.150585 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.150665 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.150695 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.150731 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.150756 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:36Z","lastTransitionTime":"2026-02-19T12:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.254344 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.254426 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.254451 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.254481 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.254532 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:36Z","lastTransitionTime":"2026-02-19T12:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.287936 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 22:42:54.285825091 +0000 UTC
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.314378 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm"
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.314555 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.314421 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 12:47:36 crc kubenswrapper[4833]: E0219 12:47:36.314651 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c"
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.314702 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 12:47:36 crc kubenswrapper[4833]: E0219 12:47:36.314840 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 12:47:36 crc kubenswrapper[4833]: E0219 12:47:36.314983 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 12:47:36 crc kubenswrapper[4833]: E0219 12:47:36.315170 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.356657 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.356730 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.356747 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.356771 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.356790 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:36Z","lastTransitionTime":"2026-02-19T12:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.459053 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.459101 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.459114 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.459134 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.459146 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:36Z","lastTransitionTime":"2026-02-19T12:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.561983 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.562033 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.562050 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.562072 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.562088 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:36Z","lastTransitionTime":"2026-02-19T12:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.635978 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.636015 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.636024 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.636040 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.636052 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:36Z","lastTransitionTime":"2026-02-19T12:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:36 crc kubenswrapper[4833]: E0219 12:47:36.655343 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9bc9539f-520c-4440-a07d-375a239a8e0f\\\",\\\"systemUUID\\\":\\\"dc14cf1a-5576-4d69-98fb-0c44d3f24b1f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:36Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.660118 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.660169 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.660188 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.660213 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.660230 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:36Z","lastTransitionTime":"2026-02-19T12:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:36 crc kubenswrapper[4833]: E0219 12:47:36.677899 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9bc9539f-520c-4440-a07d-375a239a8e0f\\\",\\\"systemUUID\\\":\\\"dc14cf1a-5576-4d69-98fb-0c44d3f24b1f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:36Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.683461 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.683527 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.683539 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.683555 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.683567 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:36Z","lastTransitionTime":"2026-02-19T12:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:36 crc kubenswrapper[4833]: E0219 12:47:36.701599 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9bc9539f-520c-4440-a07d-375a239a8e0f\\\",\\\"systemUUID\\\":\\\"dc14cf1a-5576-4d69-98fb-0c44d3f24b1f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:36Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.705583 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.705640 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.705657 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.705680 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.705700 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:36Z","lastTransitionTime":"2026-02-19T12:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:36 crc kubenswrapper[4833]: E0219 12:47:36.723206 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9bc9539f-520c-4440-a07d-375a239a8e0f\\\",\\\"systemUUID\\\":\\\"dc14cf1a-5576-4d69-98fb-0c44d3f24b1f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:36Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.727406 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.727454 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.727467 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.727481 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.727519 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:36Z","lastTransitionTime":"2026-02-19T12:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:36 crc kubenswrapper[4833]: E0219 12:47:36.744792 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9bc9539f-520c-4440-a07d-375a239a8e0f\\\",\\\"systemUUID\\\":\\\"dc14cf1a-5576-4d69-98fb-0c44d3f24b1f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:36Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:36 crc kubenswrapper[4833]: E0219 12:47:36.745017 4833 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.746705 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.746757 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.746771 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.746805 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.746818 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:36Z","lastTransitionTime":"2026-02-19T12:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.850127 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.850219 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.850237 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.850294 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.850312 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:36Z","lastTransitionTime":"2026-02-19T12:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.953657 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.953711 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.953727 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.953753 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:36 crc kubenswrapper[4833]: I0219 12:47:36.953773 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:36Z","lastTransitionTime":"2026-02-19T12:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:37 crc kubenswrapper[4833]: I0219 12:47:37.056419 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:37 crc kubenswrapper[4833]: I0219 12:47:37.056472 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:37 crc kubenswrapper[4833]: I0219 12:47:37.056483 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:37 crc kubenswrapper[4833]: I0219 12:47:37.056515 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:37 crc kubenswrapper[4833]: I0219 12:47:37.056530 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:37Z","lastTransitionTime":"2026-02-19T12:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:37 crc kubenswrapper[4833]: I0219 12:47:37.158959 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:37 crc kubenswrapper[4833]: I0219 12:47:37.159010 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:37 crc kubenswrapper[4833]: I0219 12:47:37.159022 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:37 crc kubenswrapper[4833]: I0219 12:47:37.159040 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:37 crc kubenswrapper[4833]: I0219 12:47:37.159052 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:37Z","lastTransitionTime":"2026-02-19T12:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:37 crc kubenswrapper[4833]: I0219 12:47:37.261868 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:37 crc kubenswrapper[4833]: I0219 12:47:37.261924 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:37 crc kubenswrapper[4833]: I0219 12:47:37.261934 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:37 crc kubenswrapper[4833]: I0219 12:47:37.261948 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:37 crc kubenswrapper[4833]: I0219 12:47:37.261956 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:37Z","lastTransitionTime":"2026-02-19T12:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:37 crc kubenswrapper[4833]: I0219 12:47:37.288540 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 07:25:52.4073829 +0000 UTC Feb 19 12:47:37 crc kubenswrapper[4833]: I0219 12:47:37.364609 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:37 crc kubenswrapper[4833]: I0219 12:47:37.364671 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:37 crc kubenswrapper[4833]: I0219 12:47:37.364694 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:37 crc kubenswrapper[4833]: I0219 12:47:37.364748 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:37 crc kubenswrapper[4833]: I0219 12:47:37.364765 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:37Z","lastTransitionTime":"2026-02-19T12:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:37 crc kubenswrapper[4833]: I0219 12:47:37.467775 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:37 crc kubenswrapper[4833]: I0219 12:47:37.467834 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:37 crc kubenswrapper[4833]: I0219 12:47:37.467859 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:37 crc kubenswrapper[4833]: I0219 12:47:37.467888 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:37 crc kubenswrapper[4833]: I0219 12:47:37.467910 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:37Z","lastTransitionTime":"2026-02-19T12:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
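[Annotation, not part of the log: the certificate_manager.go entry above shows the kubelet-serving certificate still valid until 2026-02-24, while the earlier webhook failure reports a certificate that expired 2025-08-24; note also that the rotation deadlines the kubelet computes here already lie in the past relative to the logged clock (2026-02-19). A minimal Go sketch for printing a certificate's validity window; the PEM path is an assumption based on common OpenShift/CRC layouts, not something stated in this log.]

// certcheck.go: print NotBefore/NotAfter of a kubelet serving certificate.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Assumed path of the current kubelet serving cert (hypothetical; adjust as needed).
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-server-current.pem")
	if err != nil {
		log.Fatalf("read cert: %v", err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatalf("parse cert: %v", err)
	}
	// A NotAfter earlier than the current wall clock matches the
	// "certificate has expired or is not yet valid" failures in this log.
	fmt.Printf("subject=%s\nnotBefore=%s\nnotAfter=%s\n", cert.Subject, cert.NotBefore, cert.NotAfter)
}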
[... 9 node-status cycles (12:47:37.364 through 12:47:38.191), identical except for timestamps, omitted for readability ...]
Feb 19 12:47:38 crc kubenswrapper[4833]: I0219 12:47:38.233992 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4177542e-89ba-436d-bc9d-e792f2da656c-metrics-certs\") pod \"network-metrics-daemon-clgkm\" (UID: \"4177542e-89ba-436d-bc9d-e792f2da656c\") " pod="openshift-multus/network-metrics-daemon-clgkm"
Feb 19 12:47:38 crc kubenswrapper[4833]: E0219 12:47:38.234187 4833 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 19 12:47:38 crc kubenswrapper[4833]: E0219 12:47:38.234343 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4177542e-89ba-436d-bc9d-e792f2da656c-metrics-certs podName:4177542e-89ba-436d-bc9d-e792f2da656c nodeName:}" failed. No retries permitted until 2026-02-19 12:48:10.234302108 +0000 UTC m=+100.629820916 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4177542e-89ba-436d-bc9d-e792f2da656c-metrics-certs") pod "network-metrics-daemon-clgkm" (UID: "4177542e-89ba-436d-bc9d-e792f2da656c") : object "openshift-multus"/"metrics-daemon-secret" not registered
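[Annotation, not part of the log: the "not registered" error above is kubelet-side, meaning the Secret is not yet in the kubelet's local object cache (typical while the node is NotReady); it does not by itself mean the Secret is missing from the API server. A short client-go sketch, under the assumption that a working kubeconfig exists at the default home path, to check the server side directly.]

// secretcheck.go: ask the API server whether the Secret actually exists.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes ~/.kube/config is present and points at the cluster in question.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatalf("kubeconfig: %v", err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("client: %v", err)
	}
	// Namespace and name taken from the log entries above.
	s, err := cs.CoreV1().Secrets("openshift-multus").Get(context.Background(), "metrics-daemon-secret", metav1.GetOptions{})
	if err != nil {
		// A NotFound error here would mean the Secret is truly absent,
		// as opposed to merely unregistered in the kubelet cache.
		log.Fatalf("get secret: %v", err)
	}
	fmt.Printf("found %s/%s with %d data keys\n", s.Namespace, s.Name, len(s.Data))
}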
Feb 19 12:47:38 crc kubenswrapper[4833]: I0219 12:47:38.289562 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 08:12:26.956662177 +0000 UTC
[... 1 node-status cycle (12:47:38.293), identical except for timestamps, omitted for readability ...]
Feb 19 12:47:38 crc kubenswrapper[4833]: I0219 12:47:38.314941 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 12:47:38 crc kubenswrapper[4833]: I0219 12:47:38.315016 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 12:47:38 crc kubenswrapper[4833]: I0219 12:47:38.315112 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm"
Feb 19 12:47:38 crc kubenswrapper[4833]: I0219 12:47:38.315207 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 12:47:38 crc kubenswrapper[4833]: E0219 12:47:38.315565 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 12:47:38 crc kubenswrapper[4833]: E0219 12:47:38.315340 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 12:47:38 crc kubenswrapper[4833]: E0219 12:47:38.315895 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 19 12:47:38 crc kubenswrapper[4833]: E0219 12:47:38.316020 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c"
[... 9 node-status cycles (12:47:38.397 through 12:47:39.221), identical except for timestamps, omitted for readability ...]
Feb 19 12:47:39 crc kubenswrapper[4833]: I0219 12:47:39.291230 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 07:59:03.945546803 +0000 UTC
[... 6 node-status cycles (12:47:39.323 through 12:47:39.846), identical except for timestamps, omitted for readability ...]
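[Annotation, not part of the log: every elided cycle repeats because the container runtime reports NetworkReady=false until a CNI configuration file appears in /etc/kubernetes/cni/net.d/; the multus entries below show it separately waiting for its own readiness indicator file. A small Go sketch, not from the log, that polls the directory named in the kubelet message until a config shows up.]

// cniwatch.go: poll the CNI conf dir the kubelet is complaining about.
package main

import (
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Directory taken verbatim from the kubelet message above.
	const dir = "/etc/kubernetes/cni/net.d"
	for {
		entries, err := os.ReadDir(dir)
		switch {
		case err != nil:
			log.Printf("read %s: %v", dir, err)
		case len(entries) > 0:
			for _, e := range entries {
				fmt.Println("found CNI config:", e.Name())
			}
			return // network plugin has written its config; kubelet should recover
		default:
			log.Printf("%s is still empty; network plugin not ready", dir)
		}
		time.Sleep(2 * time.Second)
	}
}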
Feb 19 12:47:39 crc kubenswrapper[4833]: I0219 12:47:39.889624 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9p75n_4e1957a0-ea7d-4831-ae8f-630a9529ece1/kube-multus/0.log"
Feb 19 12:47:39 crc kubenswrapper[4833]: I0219 12:47:39.889667 4833 generic.go:334] "Generic (PLEG): container finished" podID="4e1957a0-ea7d-4831-ae8f-630a9529ece1" containerID="55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e" exitCode=1
Feb 19 12:47:39 crc kubenswrapper[4833]: I0219 12:47:39.889691 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9p75n" event={"ID":"4e1957a0-ea7d-4831-ae8f-630a9529ece1","Type":"ContainerDied","Data":"55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e"}
Feb 19 12:47:39 crc kubenswrapper[4833]: I0219 12:47:39.890040 4833 scope.go:117] "RemoveContainer" containerID="55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e"
Feb 19 12:47:39 crc kubenswrapper[4833]: I0219 12:47:39.909667 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91b543fb-1f33-4706-9ce7-dff08bf7b82f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08afefdd71557d6c17668ed12d83aa416dcb83414ff4c8d741df835d2cdfdf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4ef1418f9464e95e739e3a543b14668c04159065dea6093d086d75a32d919e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3caf63dc183f47734bc7ada20dc729d98465449779a981f21080c0f23ef7e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://990bc2af578ada7fc2630b0d8ff77ab12bc15e0883ae62d8f64598944f6255f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:39Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:39 crc kubenswrapper[4833]: I0219 12:47:39.928859 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dfd21a-8b34-4a6f-8b53-f220425fb369\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fe0c469c1d92b8cc814628a16596c31a9dae61fdc7820423d94a8a25c622c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6facdd3120ee01990d31602f404ba1bfdd78ebfe3de3e0208a1f1058bede3472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef38f10daeecfe4798979b037586cb7553da6ed81347ba4a1c2f1fa6671e269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7691cd25733ea26644ce2b86eb92587cf9c0545fbf32d8a39203f8ef305709\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7691cd25733ea26644ce2b86eb92587cf9c0545fbf32d8a39203f8ef305709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:39Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:39 crc kubenswrapper[4833]: I0219 12:47:39.945817 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T12:47:39Z\\\",\\\"message\\\":\\\"2026-02-19T12:46:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c3603840-e1be-4fee-a770-a84d70dda6ea\\\\n2026-02-19T12:46:53+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c3603840-e1be-4fee-a770-a84d70dda6ea to /host/opt/cni/bin/\\\\n2026-02-19T12:46:54Z [verbose] multus-daemon started\\\\n2026-02-19T12:46:54Z [verbose] Readiness Indicator file check\\\\n2026-02-19T12:47:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:39Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:39 crc kubenswrapper[4833]: I0219 12:47:39.948672 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:39 crc kubenswrapper[4833]: I0219 12:47:39.948713 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:39 crc kubenswrapper[4833]: I0219 12:47:39.948725 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:39 crc kubenswrapper[4833]: I0219 12:47:39.948743 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:39 crc kubenswrapper[4833]: I0219 
12:47:39.948755 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:39Z","lastTransitionTime":"2026-02-19T12:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:39 crc kubenswrapper[4833]: I0219 12:47:39.958143 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:39Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:39 crc kubenswrapper[4833]: I0219 12:47:39.974785 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:39Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:39 crc kubenswrapper[4833]: I0219 12:47:39.992145 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1b032478ce1fd684af32ce6495ec3256986d402cc0694ef33a31b359412196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:39Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.010278 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:40Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.035056 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97895ecec4af35cb97feaeedbcd594237613761
06079a5980af39b81959972c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97895ecec4af35cb97feaeedbcd59423761376106079a5980af39b81959972c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T12:47:20Z\\\",\\\"message\\\":\\\"flector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:20.369449 6439 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 12:47:20.369490 6439 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 12:47:20.369563 6439 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 12:47:20.369568 6439 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 12:47:20.369596 6439 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 12:47:20.369598 6439 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 12:47:20.369619 6439 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 12:47:20.369658 6439 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 12:47:20.369730 6439 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 12:47:20.369750 6439 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 12:47:20.369762 6439 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 12:47:20.369787 6439 factory.go:656] Stopping watch factory\\\\nI0219 12:47:20.369807 6439 ovnkube.go:599] Stopped ovnkube\\\\nI0219 12:47:20.369845 6439 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 12:47:20.369845 6439 metrics.go:553] Stopping metrics server at address\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:47:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwqj9_openshift-ovn-kubernetes(6dafae6a-984e-4e99-90ca-76937bfcc3d6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:40Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.046607 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clgkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4177542e-89ba-436d-bc9d-e792f2da656c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nk85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nk85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:47:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clgkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:40Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.050859 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.050989 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.051067 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.051127 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.051184 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:40Z","lastTransitionTime":"2026-02-19T12:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.060004 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:40Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.075386 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ae9deccb64ee85ac4adb206a70f8e11a3143dbb1939e7fc6e06f73639de860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:40Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.092087 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e8adb7f634f05d280c54259846c41c2b25188cec050cad707c2cbff3cb79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:40Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.107078 4833 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:40Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.121387 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:40Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.133910 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:40Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.147581 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfd8074-c921-4100-a633-232f33b775b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e82b73c77669166b2181b50da7d3d49dcd5463ce3338237b2bc03f0a207a88f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:40Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.153189 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.153228 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.153246 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.153268 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.153285 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:40Z","lastTransitionTime":"2026-02-19T12:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.162807 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf14d5-7b12-4a96-b73b-3c8467eda471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc3ae8695e69f07a2d2c47573b6295098b4c09e49549f32625f545aa3abe6cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf8f07c6c232496c55aa498538c41bf18a5eb400cf7fa98145e9f88c129444e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:47:04Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6g2h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:40Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.255915 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.255974 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.255992 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.256014 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.256051 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:40Z","lastTransitionTime":"2026-02-19T12:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.292246 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 02:57:47.786277963 +0000 UTC Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.314583 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.314616 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:47:40 crc kubenswrapper[4833]: E0219 12:47:40.314680 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.314759 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:47:40 crc kubenswrapper[4833]: E0219 12:47:40.314891 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.314968 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:47:40 crc kubenswrapper[4833]: E0219 12:47:40.315021 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:47:40 crc kubenswrapper[4833]: E0219 12:47:40.315165 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.326808 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:40Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.339946 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ae9deccb64ee85ac4adb206a70f8e11a3143dbb1939e7fc6e06f73639de860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:40Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.348257 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e8adb7f634f05d280c54259846c41c2b25188cec050cad707c2cbff3cb79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/r
ootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:40Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.358134 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"
name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:40Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.358227 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.358271 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.358289 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.358316 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.358335 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:40Z","lastTransitionTime":"2026-02-19T12:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.370333 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:40Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.380686 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfd8074-c921-4100-a633-232f33b775b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e82b73c77669166b2181b50da7d3d49dcd5463ce3338237b2bc03f0a207a88f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:40Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.391712 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf14d5-7b12-4a96-b73b-3c8467eda471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc3ae8695e69f07a2d2c47573b6295098b4c09e49549f32625f545aa3abe6cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf8f07c6c232496c55aa498538c41bf18a5eb400cf7fa98145e9f88c129444e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:47:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6g2h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:40Z is after 2025-08-24T17:21:41Z" Feb 19 
12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.407306 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:40Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.418856 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dfd21a-8b34-4a6f-8b53-f220425fb369\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fe0c469c1d92b8cc814628a16596c31a9dae61fdc7820423d94a8a25c622c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6facdd3120ee01990d31602f404ba1bfdd78ebfe3de3e0208a1f1058bede3472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef38f10daeecfe4798979b037586cb7553da6ed81347ba4a1c2f1fa6671e269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7691cd25733ea26644ce2b86eb92587cf9c0545fbf32d8a39203f8ef305709\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7691cd25733ea26644ce2b86eb92587cf9c0545fbf32d8a39203f8ef305709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:40Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.440272 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T12:47:39Z\\\",\\\"message\\\":\\\"2026-02-19T12:46:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c3603840-e1be-4fee-a770-a84d70dda6ea\\\\n2026-02-19T12:46:53+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c3603840-e1be-4fee-a770-a84d70dda6ea to /host/opt/cni/bin/\\\\n2026-02-19T12:46:54Z [verbose] multus-daemon started\\\\n2026-02-19T12:46:54Z [verbose] Readiness Indicator file check\\\\n2026-02-19T12:47:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:40Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.453441 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91b543fb-1f33-4706-9ce7-dff08bf7b82f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08afefdd71557d6c17668ed12d83aa416dcb83414ff4c8d741df835d2cdfdf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4ef1418f9464e95e739e3a543b14668c04159065dea6093d086d75a32d919e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3caf63dc183f47734bc7ada20dc729d98465449779a981f21080c0f23ef7e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://990bc2af578ada7fc2630b0d8ff77ab12bc15e0883ae62d8f64598944f6255f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:40Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.462304 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.462327 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.462337 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.462351 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.462363 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:40Z","lastTransitionTime":"2026-02-19T12:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.463188 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:40Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.475405 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1b032478ce1fd684af32ce6495ec3256986d402cc0694ef33a31b359412196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:40Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.486927 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:40Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.513087 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97895ecec4af35cb97feaeedbcd594237613761
06079a5980af39b81959972c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97895ecec4af35cb97feaeedbcd59423761376106079a5980af39b81959972c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T12:47:20Z\\\",\\\"message\\\":\\\"flector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:20.369449 6439 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 12:47:20.369490 6439 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 12:47:20.369563 6439 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 12:47:20.369568 6439 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 12:47:20.369596 6439 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 12:47:20.369598 6439 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 12:47:20.369619 6439 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 12:47:20.369658 6439 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 12:47:20.369730 6439 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 12:47:20.369750 6439 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 12:47:20.369762 6439 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 12:47:20.369787 6439 factory.go:656] Stopping watch factory\\\\nI0219 12:47:20.369807 6439 ovnkube.go:599] Stopped ovnkube\\\\nI0219 12:47:20.369845 6439 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 12:47:20.369845 6439 metrics.go:553] Stopping metrics server at address\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:47:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwqj9_openshift-ovn-kubernetes(6dafae6a-984e-4e99-90ca-76937bfcc3d6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:40Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.527307 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clgkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4177542e-89ba-436d-bc9d-e792f2da656c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nk85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nk85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:47:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clgkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:40Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.548013 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:40Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.565811 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.565850 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.565863 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.565880 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.565891 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:40Z","lastTransitionTime":"2026-02-19T12:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.669182 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.669219 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.669230 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.669244 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.669255 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:40Z","lastTransitionTime":"2026-02-19T12:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.771985 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.772042 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.772060 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.772084 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.772101 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:40Z","lastTransitionTime":"2026-02-19T12:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
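
The repeated "Node became not ready" records above all trace back to one missing file: ovnkube-controller is in CrashLoopBackOff (see the ovnkube-node-pwqj9 status earlier in this stream), so OVN-Kubernetes never writes its CNI configuration, multus' "Readiness Indicator file check" gives up after roughly 45 seconds (started 12:46:54, failed 12:47:39 in the kube-multus container log), and the kubelet keeps reporting NetworkReady=false. The Python sketch below mimics that readiness-indicator poll; the file path is taken verbatim from the multus error above, while the timeout and poll interval are assumptions, and none of this is the actual multus implementation.

import os
import time

# Illustrative sketch only, not the multus source: poll until the CNI config
# that ovn-kubernetes is expected to write shows up, or fail with the same
# "timed out waiting for the condition" mode seen in the log above.
READINESS_INDICATOR = "/host/run/multus/cni/net.d/10-ovn-kubernetes.conf"  # path from the log
TIMEOUT_S = 45.0        # assumed; the log shows ~45s between start and failure
POLL_INTERVAL_S = 1.0   # assumed

def wait_for_file(path: str, timeout_s: float, interval_s: float) -> bool:
    """Return True as soon as `path` exists; False once the deadline passes."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if os.path.isfile(path):
            return True
        time.sleep(interval_s)
    return False

if __name__ == "__main__":
    if not wait_for_file(READINESS_INDICATOR, TIMEOUT_S, POLL_INTERVAL_S):
        raise SystemExit(
            f"still waiting for readinessindicatorfile @ {READINESS_INDICATOR}")

Once ovnkube-controller stays up long enough to write that file, the poll succeeds and the NodeNotReady loop should stop on its own.
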
Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.874283 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.874335 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.874354 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.874378 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.874398 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:40Z","lastTransitionTime":"2026-02-19T12:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.895181 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9p75n_4e1957a0-ea7d-4831-ae8f-630a9529ece1/kube-multus/0.log"
Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.895241 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9p75n" event={"ID":"4e1957a0-ea7d-4831-ae8f-630a9529ece1","Type":"ContainerStarted","Data":"7038937048d6f77bfed5b0b521c844fe325b30f343970d1b5f654ea93f433aae"}
Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.921840 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f97895ecec4af35cb97feaeedbcd594237613761
06079a5980af39b81959972c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97895ecec4af35cb97feaeedbcd59423761376106079a5980af39b81959972c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T12:47:20Z\\\",\\\"message\\\":\\\"flector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:20.369449 6439 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 12:47:20.369490 6439 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 12:47:20.369563 6439 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 12:47:20.369568 6439 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 12:47:20.369596 6439 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 12:47:20.369598 6439 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 12:47:20.369619 6439 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 12:47:20.369658 6439 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 12:47:20.369730 6439 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 12:47:20.369750 6439 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 12:47:20.369762 6439 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 12:47:20.369787 6439 factory.go:656] Stopping watch factory\\\\nI0219 12:47:20.369807 6439 ovnkube.go:599] Stopped ovnkube\\\\nI0219 12:47:20.369845 6439 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 12:47:20.369845 6439 metrics.go:553] Stopping metrics server at address\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:47:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwqj9_openshift-ovn-kubernetes(6dafae6a-984e-4e99-90ca-76937bfcc3d6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:40Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.937262 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clgkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4177542e-89ba-436d-bc9d-e792f2da656c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nk85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nk85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:47:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clgkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:40Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.954730 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:40Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.973251 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:40Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.981434 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.981463 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.981474 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.981489 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.981517 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:40Z","lastTransitionTime":"2026-02-19T12:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:40 crc kubenswrapper[4833]: I0219 12:47:40.988349 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1b032478ce1fd684af32ce6495ec3256986d402cc0694ef33a31b359412196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:40Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.006628 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:41Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.023653 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:41Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.066246 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ae9deccb64ee85ac4adb206a70f8e11a3143dbb1939e7fc6e06f73639de860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:41Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.083896 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.083960 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.083976 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.084000 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.084019 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:41Z","lastTransitionTime":"2026-02-19T12:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.084512 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e8adb7f634f05d280c54259846c41c2b25188cec050cad707c2cbff3cb79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:41Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.104224 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf14d5-7b12-4a96-b73b-3c8467eda471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc3ae8695e69f07a2d2c47573b6295098b4c09e49549f32625f545aa3abe6cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf8f07c6c232496c55aa498538c41bf18a5eb400cf7fa98145e9f88c129444e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:47:
04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6g2h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:41Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.124041 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:41Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.137052 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:41Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.150133 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:41Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.159923 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfd8074-c921-4100-a633-232f33b775b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e82b73c77669166b2181b50da7d3d49dcd5463ce3338237b2bc03f0a207a88f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:41Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.173889 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91b543fb-1f33-4706-9ce7-dff08bf7b82f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08afefdd71557d6c17668ed12d83aa416dcb83414ff4c8d741df835d2cdfdf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4ef1418f9464e95e739e3a543b14668c04159065dea6093d086d75a32d919e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3caf63dc183f47734bc7ada20dc729d98465449779a981f21080c0f23ef7e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://990bc2af578ada7fc2630b0d8ff77ab12bc15e0883ae62d8f64598944f6255f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:41Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.186347 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.186378 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.186389 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.186404 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.186415 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:41Z","lastTransitionTime":"2026-02-19T12:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.191451 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dfd21a-8b34-4a6f-8b53-f220425fb369\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fe0c469c1d92b8cc814628a16596c31a9dae61fdc7820423d94a8a25c622c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6facdd3120ee01990d31602f404ba1bfdd78ebfe3de3e0208a1f1058bede3472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef38f10daeecfe4798979b037586cb7553da6ed81347ba4a1c2f1fa6671e269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7691cd25733ea26644ce2b86eb92587cf9c0545fbf32d8a39203f8ef305709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7691cd25733ea26644ce2b86eb92587cf9c0545fbf32d8a39203f8ef305709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:41Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.208838 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7038937048d6f77bfed5b0b521c844fe325b30f343970d1b5f654ea93f433aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T12:47:39Z\\\",\\\"message\\\":\\\"2026-02-19T12:46:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c3603840-e1be-4fee-a770-a84d70dda6ea\\\\n2026-02-19T12:46:53+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_c3603840-e1be-4fee-a770-a84d70dda6ea to /host/opt/cni/bin/\\\\n2026-02-19T12:46:54Z [verbose] multus-daemon started\\\\n2026-02-19T12:46:54Z [verbose] Readiness Indicator file check\\\\n2026-02-19T12:47:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:41Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.289325 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.289382 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.289396 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.289410 4833 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.289419 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:41Z","lastTransitionTime":"2026-02-19T12:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.292757 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 00:41:32.685776218 +0000 UTC Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.391794 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.391827 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.391837 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.391852 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.391862 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:41Z","lastTransitionTime":"2026-02-19T12:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.493841 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.493895 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.493906 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.493924 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.493936 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:41Z","lastTransitionTime":"2026-02-19T12:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.595680 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.595726 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.595738 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.595755 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.595767 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:41Z","lastTransitionTime":"2026-02-19T12:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.698702 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.699143 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.699457 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.699708 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.699887 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:41Z","lastTransitionTime":"2026-02-19T12:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.802293 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.802333 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.802345 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.802360 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.802373 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:41Z","lastTransitionTime":"2026-02-19T12:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.905020 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.905059 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.905072 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.905088 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:41 crc kubenswrapper[4833]: I0219 12:47:41.905099 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:41Z","lastTransitionTime":"2026-02-19T12:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.008290 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.008578 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.008675 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.008769 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.008852 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:42Z","lastTransitionTime":"2026-02-19T12:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.111643 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.111670 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.111679 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.111691 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.111700 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:42Z","lastTransitionTime":"2026-02-19T12:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.214126 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.214139 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.214146 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.214155 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.214163 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:42Z","lastTransitionTime":"2026-02-19T12:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.293041 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 12:56:46.842941394 +0000 UTC Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.315321 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.315385 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:47:42 crc kubenswrapper[4833]: E0219 12:47:42.315887 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.315918 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:47:42 crc kubenswrapper[4833]: E0219 12:47:42.316044 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:47:42 crc kubenswrapper[4833]: E0219 12:47:42.316120 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.315911 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:47:42 crc kubenswrapper[4833]: E0219 12:47:42.316588 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.319294 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.319344 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.319361 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.319383 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.319404 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:42Z","lastTransitionTime":"2026-02-19T12:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.421826 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.421938 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.421962 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.421991 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.422013 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:42Z","lastTransitionTime":"2026-02-19T12:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.524901 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.524963 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.524981 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.525005 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.525023 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:42Z","lastTransitionTime":"2026-02-19T12:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.628384 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.628649 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.628709 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.628780 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.628837 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:42Z","lastTransitionTime":"2026-02-19T12:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.730973 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.731234 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.731311 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.731394 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.731576 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:42Z","lastTransitionTime":"2026-02-19T12:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.834022 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.834362 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.834466 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.834606 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.834688 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:42Z","lastTransitionTime":"2026-02-19T12:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.936958 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.936997 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.937007 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.937026 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:42 crc kubenswrapper[4833]: I0219 12:47:42.937040 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:42Z","lastTransitionTime":"2026-02-19T12:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.039636 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.039672 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.039683 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.039701 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.039713 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:43Z","lastTransitionTime":"2026-02-19T12:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.141934 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.141993 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.142004 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.142021 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.142030 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:43Z","lastTransitionTime":"2026-02-19T12:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.244430 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.244569 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.244590 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.244614 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.244631 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:43Z","lastTransitionTime":"2026-02-19T12:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.293779 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 11:47:14.064659143 +0000 UTC Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.347752 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.347809 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.347829 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.347856 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.347875 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:43Z","lastTransitionTime":"2026-02-19T12:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.450436 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.450473 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.450482 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.450511 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.450520 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:43Z","lastTransitionTime":"2026-02-19T12:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.552850 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.552912 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.552922 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.552936 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.552944 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:43Z","lastTransitionTime":"2026-02-19T12:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.655583 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.655626 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.655635 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.655653 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.655663 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:43Z","lastTransitionTime":"2026-02-19T12:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.758218 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.758258 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.758267 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.758280 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.758290 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:43Z","lastTransitionTime":"2026-02-19T12:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.860629 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.860698 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.860721 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.860750 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.860772 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:43Z","lastTransitionTime":"2026-02-19T12:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.963326 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.963369 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.963380 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.963396 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:43 crc kubenswrapper[4833]: I0219 12:47:43.963406 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:43Z","lastTransitionTime":"2026-02-19T12:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.066892 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.066931 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.066950 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.066976 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.066987 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:44Z","lastTransitionTime":"2026-02-19T12:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.169486 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.169569 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.169588 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.169610 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.169626 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:44Z","lastTransitionTime":"2026-02-19T12:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.272769 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.272823 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.272840 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.272865 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.272882 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:44Z","lastTransitionTime":"2026-02-19T12:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.294470 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 09:36:20.145910333 +0000 UTC Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.314126 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.314152 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.314220 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:47:44 crc kubenswrapper[4833]: E0219 12:47:44.314235 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:47:44 crc kubenswrapper[4833]: E0219 12:47:44.314301 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.314259 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:47:44 crc kubenswrapper[4833]: E0219 12:47:44.314359 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:47:44 crc kubenswrapper[4833]: E0219 12:47:44.314592 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.375700 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.375729 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.375756 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.375768 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.375776 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:44Z","lastTransitionTime":"2026-02-19T12:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.477775 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.477816 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.477827 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.477847 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.477860 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:44Z","lastTransitionTime":"2026-02-19T12:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.580198 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.580265 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.580284 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.580308 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.580325 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:44Z","lastTransitionTime":"2026-02-19T12:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.682715 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.682745 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.682757 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.682772 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.682783 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:44Z","lastTransitionTime":"2026-02-19T12:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.785233 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.785291 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.785307 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.785329 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.785345 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:44Z","lastTransitionTime":"2026-02-19T12:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.915908 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.915935 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.915942 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.915954 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:44 crc kubenswrapper[4833]: I0219 12:47:44.915965 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:44Z","lastTransitionTime":"2026-02-19T12:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.018030 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.018074 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.018084 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.018100 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.018112 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:45Z","lastTransitionTime":"2026-02-19T12:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.121067 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.121113 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.121123 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.121137 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.121146 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:45Z","lastTransitionTime":"2026-02-19T12:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.223799 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.223864 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.223881 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.223914 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.223937 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:45Z","lastTransitionTime":"2026-02-19T12:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.294814 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 06:30:27.924568257 +0000 UTC Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.316232 4833 scope.go:117] "RemoveContainer" containerID="f97895ecec4af35cb97feaeedbcd59423761376106079a5980af39b81959972c" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.337663 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.337735 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.337752 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.337775 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.337792 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:45Z","lastTransitionTime":"2026-02-19T12:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.439704 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.439746 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.439761 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.439786 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.439802 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:45Z","lastTransitionTime":"2026-02-19T12:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.543669 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.543743 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.543756 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.543772 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.543784 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:45Z","lastTransitionTime":"2026-02-19T12:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.647645 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.647696 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.647715 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.647737 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.647753 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:45Z","lastTransitionTime":"2026-02-19T12:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.750540 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.750613 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.750636 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.750666 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.750683 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:45Z","lastTransitionTime":"2026-02-19T12:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.853467 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.853544 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.853560 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.853583 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.853616 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:45Z","lastTransitionTime":"2026-02-19T12:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.925544 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwqj9_6dafae6a-984e-4e99-90ca-76937bfcc3d6/ovnkube-controller/2.log" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.929164 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" event={"ID":"6dafae6a-984e-4e99-90ca-76937bfcc3d6","Type":"ContainerStarted","Data":"96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c"} Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.929733 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.956199 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.956246 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.956258 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.956272 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.956282 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:45Z","lastTransitionTime":"2026-02-19T12:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.956211 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91b543fb-1f33-4706-9ce7-dff08bf7b82f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08afefdd71557d6c17668ed12d83aa416dcb83414ff4c8d741df835d2cdfdf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4ef1418f9464e95e739e3a543b14668c04159065dea6093d086d75a32d919e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3caf63dc183f47734bc7ada20dc729d98465449779a981f21080c0f23ef7e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://990bc2af578ada7fc2630b0d8ff77ab12bc15e0883ae62d8f64598944f6255f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:45Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:45 crc kubenswrapper[4833]: I0219 12:47:45.982556 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dfd21a-8b34-4a6f-8b53-f220425fb369\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fe0c469c1d92b8cc814628a16596c31a9dae61fdc7820423d94a8a25c622c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6facdd3120ee01990d31602f404ba1bfdd78ebfe3de3e0208a1f1058bede3472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef38f10daeecfe4798979b037586cb7553da6ed81347ba4a1c2f1fa6671e269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7691cd25733ea26644ce2b86eb92587cf9c0545fbf32d8a39203f8ef305709\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7691cd25733ea26644ce2b86eb92587cf9c0545fbf32d8a39203f8ef305709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:45Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.039645 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7038937048d6f77bfed5b0b521c844fe325b30f343970d1b5f654ea93f433aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T12:47:39Z\\\",\\\"message\\\":\\\"2026-02-19T12:46:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c3603840-e1be-4fee-a770-a84d70dda6ea\\\\n2026-02-19T12:46:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c3603840-e1be-4fee-a770-a84d70dda6ea to /host/opt/cni/bin/\\\\n2026-02-19T12:46:54Z [verbose] multus-daemon started\\\\n2026-02-19T12:46:54Z [verbose] Readiness Indicator file check\\\\n2026-02-19T12:47:39Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:46Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.058681 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.058724 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.058735 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.058766 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.058783 4833 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:46Z","lastTransitionTime":"2026-02-19T12:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.068089 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97895ecec4af35cb97feaeedbcd59423761376106079a5980af39b81959972c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T12:47:20Z\\\",\\\"message\\\":\\\"flector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:20.369449 6439 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 12:47:20.369490 6439 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 12:47:20.369563 6439 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 12:47:20.369568 6439 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 12:47:20.369596 6439 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 12:47:20.369598 6439 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 12:47:20.369619 6439 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 12:47:20.369658 6439 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 12:47:20.369730 6439 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 12:47:20.369750 6439 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 12:47:20.369762 6439 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 12:47:20.369787 6439 factory.go:656] Stopping watch factory\\\\nI0219 12:47:20.369807 6439 ovnkube.go:599] Stopped ovnkube\\\\nI0219 12:47:20.369845 6439 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 12:47:20.369845 6439 metrics.go:553] Stopping metrics server at 
address\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:47:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:46Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.091004 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clgkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4177542e-89ba-436d-bc9d-e792f2da656c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nk85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nk85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:47:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clgkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:46Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.113188 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:46Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.131071 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:46Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.144227 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1b032478ce1fd684af32ce6495ec3256986d402cc0694ef33a31b359412196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:46Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.158181 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:46Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.161250 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.161311 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.161331 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.161359 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.161387 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:46Z","lastTransitionTime":"2026-02-19T12:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.170129 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:46Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.187888 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ae9deccb64ee85ac4adb206a70f8e11a3143dbb1939e7fc6e06f73639de860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:46Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.202280 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e8adb7f634f05d280c54259846c41c2b25188cec050cad707c2cbff3cb79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:46Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.216094 4833 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf14d5-7b12-4a96-b73b-3c8467eda471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc3ae8695e69f07a2d2c47573b6295098b4c09e49549f32625f545aa3abe6cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf8f07c6c232496c55aa498538c41bf18a5eb400cf7fa98145e9f88c129444e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:47:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6g2h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:46Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.229879 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:46Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.243664 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:46Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.251877 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:46Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.264570 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfd8074-c921-4100-a633-232f33b775b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e82b73c77669166b2181b50da7d3d49dcd5463ce3338237b2bc03f0a207a88f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:46Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.264888 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.264993 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.265073 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.265171 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.265252 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:46Z","lastTransitionTime":"2026-02-19T12:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.295094 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 01:50:38.774400553 +0000 UTC Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.314754 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.314764 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.314785 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:47:46 crc kubenswrapper[4833]: E0219 12:47:46.314862 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.314889 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:47:46 crc kubenswrapper[4833]: E0219 12:47:46.314989 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:47:46 crc kubenswrapper[4833]: E0219 12:47:46.315015 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:47:46 crc kubenswrapper[4833]: E0219 12:47:46.315067 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.368015 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.368074 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.368087 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.368106 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.368122 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:46Z","lastTransitionTime":"2026-02-19T12:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.470972 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.471030 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.471048 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.471113 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.471132 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:46Z","lastTransitionTime":"2026-02-19T12:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.574694 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.574753 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.574770 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.574795 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.574812 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:46Z","lastTransitionTime":"2026-02-19T12:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.677973 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.678040 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.678062 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.678094 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.678118 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:46Z","lastTransitionTime":"2026-02-19T12:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.781576 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.781635 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.781652 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.781674 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.781692 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:46Z","lastTransitionTime":"2026-02-19T12:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.884376 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.884445 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.884463 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.884490 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.884540 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:46Z","lastTransitionTime":"2026-02-19T12:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.939690 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwqj9_6dafae6a-984e-4e99-90ca-76937bfcc3d6/ovnkube-controller/3.log" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.940551 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwqj9_6dafae6a-984e-4e99-90ca-76937bfcc3d6/ovnkube-controller/2.log" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.945109 4833 generic.go:334] "Generic (PLEG): container finished" podID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerID="96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c" exitCode=1 Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.945168 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" event={"ID":"6dafae6a-984e-4e99-90ca-76937bfcc3d6","Type":"ContainerDied","Data":"96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c"} Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.945215 4833 scope.go:117] "RemoveContainer" containerID="f97895ecec4af35cb97feaeedbcd59423761376106079a5980af39b81959972c" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.946444 4833 scope.go:117] "RemoveContainer" containerID="96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c" Feb 19 12:47:46 crc kubenswrapper[4833]: E0219 12:47:46.946815 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pwqj9_openshift-ovn-kubernetes(6dafae6a-984e-4e99-90ca-76937bfcc3d6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.961824 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.961918 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.961938 4833 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.961969 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.961990 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:46Z","lastTransitionTime":"2026-02-19T12:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.967773 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:46Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:46 crc kubenswrapper[4833]: E0219 12:47:46.980411 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9bc9539f-520c-4440-a07d-375a239a8e0f\\\",\\\"systemUUID\\\":\\\"dc14cf1a-5576-4d69-98fb-0c44d3f24b1f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:46Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.987381 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.987445 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.987464 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.987653 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.987697 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:46Z","lastTransitionTime":"2026-02-19T12:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:46 crc kubenswrapper[4833]: I0219 12:47:46.992630 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:46Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:47 crc kubenswrapper[4833]: E0219 12:47:47.008629 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9bc9539f-520c-4440-a07d-375a239a8e0f\\\",\\\"systemUUID\\\":\\\"dc14cf1a-5576-4d69-98fb-0c44d3f24b1f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:47Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.011572 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:47Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.015263 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.015306 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.015324 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.015350 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.015368 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:47Z","lastTransitionTime":"2026-02-19T12:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.028175 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfd8074-c921-4100-a633-232f33b775b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e82b73c77669166b2181b50da7d3d49dcd5463ce3338237b2bc03f0a207a88f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:47Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:47 crc kubenswrapper[4833]: E0219 12:47:47.036155 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9bc9539f-520c-4440-a07d-375a239a8e0f\\\",\\\"systemUUID\\\":\\\"dc14cf1a-5576-4d69-98fb-0c44d3f24b1f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:47Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.041535 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.041614 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.041633 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.041661 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.041681 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:47Z","lastTransitionTime":"2026-02-19T12:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.046905 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf14d5-7b12-4a96-b73b-3c8467eda471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc3ae8695e69f07a2d2c47573b6295098b4c09e49549f32625f545aa3abe6cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf8f07c6c232496c55aa498538c41bf18a5eb400cf7fa98145e9f88c129444e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:47:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6g2h7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:47Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:47 crc kubenswrapper[4833]: E0219 12:47:47.063403 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9bc9539f-520c-4440-a07d-375a239a8e0f\\\",\\\"systemUUID\\\":\\\"dc14cf1a-5576-4d69-98fb-0c44d3f24b1f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:47Z is after 
2025-08-24T17:21:41Z" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.069158 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91b543fb-1f33-4706-9ce7-dff08bf7b82f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08afefdd71557d6c17668ed12d83aa416dcb83414ff4c8d741df835d2cdfdf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4ef1418f9464e95e739e3a543b14668c04159065dea6093d086d75a32d919e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3caf63dc183f47734bc7ada20dc729d98465449779a981f21080c0f23ef7e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://990bc2af578ada7fc2630b0d8ff77ab12bc15e0883ae62d8f64598944f6255f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:47Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.069326 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.069383 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.069402 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.069428 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.069448 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:47Z","lastTransitionTime":"2026-02-19T12:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.095428 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dfd21a-8b34-4a6f-8b53-f220425fb369\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fe0c469c1d92b8cc814628a16596c31a9dae61fdc7820423d94a8a25c622c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6facdd3120ee01990d31602f404ba1bfdd78ebfe3de3e0208a1f1058bede3472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef38f10daeecfe4798979b037586cb7553da6ed81347ba4a1c2f1fa6671e269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7691cd25733ea26644ce2b86eb92587cf9c0545fbf32d8a39203f8ef305709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7691cd25733ea26644ce2b86eb92587cf9c0545fbf32d8a39203f8ef305709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:47Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:47 crc kubenswrapper[4833]: E0219 12:47:47.099983 4833 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T12:47:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"9bc9539f-520c-4440-a07d-375a239a8e0f\\\",\\\"systemUUID\\\":\\\"dc14cf1a-5576-4d69-98fb-0c44d3f24b1f\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:47Z is after 
2025-08-24T17:21:41Z" Feb 19 12:47:47 crc kubenswrapper[4833]: E0219 12:47:47.100212 4833 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.106017 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.106103 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.106122 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.106179 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.106199 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:47Z","lastTransitionTime":"2026-02-19T12:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.117578 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7038937048d6f77bfed5b0b521c844fe325b30f343970d1b5f654ea93f433aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T12:47:39Z\\\",\\\"message\\\":\\\"2026-02-19T12:46:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c3603840-e1be-4fee-a770-a84d70dda6ea\\\\n2026-02-19T12:46:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c3603840-e1be-4fee-a770-a84d70dda6ea to /host/opt/cni/bin/\\\\n2026-02-19T12:46:54Z [verbose] multus-daemon started\\\\n2026-02-19T12:46:54Z 
[verbose] Readiness Indicator file check\\\\n2026-02-19T12:47:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:47Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.131430 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clgkm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4177542e-89ba-436d-bc9d-e792f2da656c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nk85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nk85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:47:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clgkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:47Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.150663 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:47Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.171605 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:47Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.188410 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1b032478ce1fd684af32ce6495ec3256986d402cc0694ef33a31b359412196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:47Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.207225 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:47Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.210335 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.210375 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.210389 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.210411 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.210428 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:47Z","lastTransitionTime":"2026-02-19T12:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.238731 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f97895ecec4af35cb97feaeedbcd59423761376106079a5980af39b81959972c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T12:47:20Z\\\",\\\"message\\\":\\\"flector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0219 12:47:20.369449 6439 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0219 12:47:20.369490 6439 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0219 12:47:20.369563 6439 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0219 12:47:20.369568 6439 handler.go:208] Removed *v1.Node event handler 2\\\\nI0219 12:47:20.369596 6439 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0219 12:47:20.369598 6439 handler.go:208] Removed *v1.Node event handler 7\\\\nI0219 12:47:20.369619 6439 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0219 12:47:20.369658 6439 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0219 12:47:20.369730 6439 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0219 12:47:20.369750 6439 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0219 12:47:20.369762 6439 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0219 12:47:20.369787 6439 factory.go:656] Stopping watch factory\\\\nI0219 12:47:20.369807 6439 ovnkube.go:599] Stopped ovnkube\\\\nI0219 12:47:20.369845 6439 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0219 12:47:20.369845 6439 metrics.go:553] Stopping metrics server at 
address\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:47:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T12:47:46Z\\\",\\\"message\\\":\\\"tring{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.168\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0219 12:47:46.479020 6828 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-c2lxp in node crc\\\\nI0219 12:47:46.480470 6828 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-c2lxp after 0 failed attempt(s)\\\\nI0219 12:47:46.480483 6828 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-c2lxp\\\\nF0219 12:47:46.480578 6828 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:47:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:47Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.260372 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:47Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.284198 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ae9deccb64ee85ac4adb206a70f8e11a3143dbb1939e7fc6e06f73639de860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:47Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.295325 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 14:21:32.490962759 +0000 UTC Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.302204 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e8adb7f634f05d280c54259846c41c2b25188cec050cad707c2cbff3cb79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:47Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.313745 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.313852 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.313878 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.313941 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.313960 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:47Z","lastTransitionTime":"2026-02-19T12:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.444684 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.444749 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.444766 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.444791 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.444810 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:47Z","lastTransitionTime":"2026-02-19T12:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.548106 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.548178 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.548204 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.548237 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.548261 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:47Z","lastTransitionTime":"2026-02-19T12:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.651797 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.651932 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.651952 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.651976 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.652025 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:47Z","lastTransitionTime":"2026-02-19T12:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.755779 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.755868 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.755916 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.755941 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.755958 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:47Z","lastTransitionTime":"2026-02-19T12:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.859577 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.859645 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.859667 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.859692 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.859709 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:47Z","lastTransitionTime":"2026-02-19T12:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.951563 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwqj9_6dafae6a-984e-4e99-90ca-76937bfcc3d6/ovnkube-controller/3.log" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.956471 4833 scope.go:117] "RemoveContainer" containerID="96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c" Feb 19 12:47:47 crc kubenswrapper[4833]: E0219 12:47:47.956756 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pwqj9_openshift-ovn-kubernetes(6dafae6a-984e-4e99-90ca-76937bfcc3d6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.962132 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.962182 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.962198 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.962225 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.962242 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:47Z","lastTransitionTime":"2026-02-19T12:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.977678 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"91b543fb-1f33-4706-9ce7-dff08bf7b82f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08afefdd71557d6c17668ed12d83aa416dcb83414ff4c8d741df835d2cdfdf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4ef1418f9464e95e739e3a543b14668c04159065dea6093d086d75a32d919e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3caf63dc183f47734bc7ada20dc729d98465449779a981f21080c0f23ef7e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://990bc2af578ada7fc2630b0d8ff77ab12bc15e0883ae62d8f64598944f6255f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:47Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:47 crc kubenswrapper[4833]: I0219 12:47:47.997886 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dfd21a-8b34-4a6f-8b53-f220425fb369\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fe0c469c1d92b8cc814628a16596c31a9dae61fdc7820423d94a8a25c622c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6facdd3120ee01990d31602f404ba1bfdd78ebfe3de3e0208a1f1058bede3472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef38f10daeecfe4798979b037586cb7553da6ed81347ba4a1c2f1fa6671e269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7691cd25733ea26644ce2b86eb92587cf9c0545fbf32d8a39203f8ef305709\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7691cd25733ea26644ce2b86eb92587cf9c0545fbf32d8a39203f8ef305709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:47Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.020577 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7038937048d6f77bfed5b0b521c844fe325b30f343970d1b5f654ea93f433aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T12:47:39Z\\\",\\\"message\\\":\\\"2026-02-19T12:46:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c3603840-e1be-4fee-a770-a84d70dda6ea\\\\n2026-02-19T12:46:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c3603840-e1be-4fee-a770-a84d70dda6ea to /host/opt/cni/bin/\\\\n2026-02-19T12:46:54Z [verbose] multus-daemon started\\\\n2026-02-19T12:46:54Z [verbose] Readiness Indicator file check\\\\n2026-02-19T12:47:39Z [error] have you checked that your default network is ready? 
still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:48Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.053647 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T12:47:46Z\\\",\\\"message\\\":\\\"tring{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.168\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0219 12:47:46.479020 6828 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-c2lxp in node crc\\\\nI0219 12:47:46.480470 6828 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-c2lxp after 0 failed attempt(s)\\\\nI0219 12:47:46.480483 6828 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-c2lxp\\\\nF0219 12:47:46.480578 6828 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:47:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwqj9_openshift-ovn-kubernetes(6dafae6a-984e-4e99-90ca-76937bfcc3d6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:48Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.064910 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.064971 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.064988 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.065015 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.065033 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:48Z","lastTransitionTime":"2026-02-19T12:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.071904 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clgkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4177542e-89ba-436d-bc9d-e792f2da656c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nk85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nk85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:47:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clgkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:48Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.093196 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:48Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.112782 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:48Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.130157 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1b032478ce1fd684af32ce6495ec3256986d402cc0694ef33a31b359412196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:48Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.146564 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:48Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.161799 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:48Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.167339 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.167378 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.167387 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.167401 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.167412 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:48Z","lastTransitionTime":"2026-02-19T12:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.185299 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ae9deccb64ee85ac4adb206a70f8e11a3143dbb1939e7fc6e06f73639de860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:48Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.205209 4833 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e8adb7f634f05d280c54259846c41c2b25188cec050cad707c2cbff3cb79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:48Z is after 2025-08-24T17:21:41Z" Feb 19 
12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.225544 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf14d5-7b12-4a96-b73b-3c8467eda471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc3ae8695e69f07a2d2c47573b6295098b4c09e49549f32625f545aa3abe6cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf8f07c6c232496c55aa498538c41bf18a5eb400cf7fa98145e9f88c129444e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:47:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6g2h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:48Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.246994 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:48Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.266073 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:48Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.271231 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.271291 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.271311 4833 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.271353 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.271372 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:48Z","lastTransitionTime":"2026-02-19T12:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.288733 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:48Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.295641 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, 
rotation deadline is 2025-11-18 16:56:20.190248955 +0000 UTC Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.304620 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-qhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfd8074-c921-4100-a633-232f33b775b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e82b73c77669166b2181b50da7d3d49dcd5463ce3338237b2bc03f0a207a88f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:48Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.314089 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.314209 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.314334 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.314216 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:47:48 crc kubenswrapper[4833]: E0219 12:47:48.314409 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:47:48 crc kubenswrapper[4833]: E0219 12:47:48.314597 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:47:48 crc kubenswrapper[4833]: E0219 12:47:48.314732 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:47:48 crc kubenswrapper[4833]: E0219 12:47:48.314816 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.376538 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.376607 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.376630 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.376687 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.376711 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:48Z","lastTransitionTime":"2026-02-19T12:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.479743 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.479823 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.479845 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.479872 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.479892 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:48Z","lastTransitionTime":"2026-02-19T12:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.583124 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.583176 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.583198 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.583229 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.583250 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:48Z","lastTransitionTime":"2026-02-19T12:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.686106 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.686171 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.686189 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.686214 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.686238 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:48Z","lastTransitionTime":"2026-02-19T12:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.789477 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.789585 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.789619 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.789649 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.789671 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:48Z","lastTransitionTime":"2026-02-19T12:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.892564 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.892692 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.892721 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.892756 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.892782 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:48Z","lastTransitionTime":"2026-02-19T12:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.996714 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.996791 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.996812 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.996838 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:48 crc kubenswrapper[4833]: I0219 12:47:48.996856 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:48Z","lastTransitionTime":"2026-02-19T12:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.099984 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.100046 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.100066 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.100091 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.100109 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:49Z","lastTransitionTime":"2026-02-19T12:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.202578 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.202659 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.202692 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.202721 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.202742 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:49Z","lastTransitionTime":"2026-02-19T12:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.296088 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 17:11:46.889469426 +0000 UTC Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.305619 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.305675 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.305692 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.305716 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.305732 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:49Z","lastTransitionTime":"2026-02-19T12:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.408729 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.408785 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.408802 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.408825 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.408840 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:49Z","lastTransitionTime":"2026-02-19T12:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.512131 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.512190 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.512208 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.512230 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.512247 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:49Z","lastTransitionTime":"2026-02-19T12:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.615393 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.615471 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.615489 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.615560 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.615582 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:49Z","lastTransitionTime":"2026-02-19T12:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.718534 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.718606 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.718627 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.718654 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.718672 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:49Z","lastTransitionTime":"2026-02-19T12:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.821479 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.821583 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.821601 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.821625 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.821644 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:49Z","lastTransitionTime":"2026-02-19T12:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.924959 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.925047 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.925072 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.925102 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:49 crc kubenswrapper[4833]: I0219 12:47:49.925116 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:49Z","lastTransitionTime":"2026-02-19T12:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.027833 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.027904 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.027921 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.027946 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.027963 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:50Z","lastTransitionTime":"2026-02-19T12:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.130980 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.131044 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.131068 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.131095 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.131116 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:50Z","lastTransitionTime":"2026-02-19T12:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.233799 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.233864 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.233880 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.233904 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.233922 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:50Z","lastTransitionTime":"2026-02-19T12:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.296268 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 10:59:59.79013882 +0000 UTC Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.314071 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.314205 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:47:50 crc kubenswrapper[4833]: E0219 12:47:50.314274 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.314421 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:47:50 crc kubenswrapper[4833]: E0219 12:47:50.314421 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.314535 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:47:50 crc kubenswrapper[4833]: E0219 12:47:50.314886 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:47:50 crc kubenswrapper[4833]: E0219 12:47:50.315079 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.330017 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.337419 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.337725 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.337953 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.338137 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.338304 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:50Z","lastTransitionTime":"2026-02-19T12:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.339192 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p75n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e1957a0-ea7d-4831-ae8f-630a9529ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7038937048d6f77bfed5b0b521c844fe325b30f343970d1b5f654ea93f433aae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T12:47:39Z\\\",\\\"message\\\":\\\"2026-02-19T12:46:53+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c3603840-e1be-4fee-a770-a84d70dda6ea\\\\n2026-02-19T12:46:53+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c3603840-e1be-4fee-a770-a84d70dda6ea to /host/opt/cni/bin/\\\\n2026-02-19T12:46:54Z [verbose] multus-daemon started\\\\n2026-02-19T12:46:54Z [verbose] Readiness Indicator file check\\\\n2026-02-19T12:47:39Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-btskh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p75n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:50Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.358591 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"91b543fb-1f33-4706-9ce7-dff08bf7b82f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08afefdd71557d6c17668ed12d83aa416dcb83414ff4c8d741df835d2cdfdf5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://be4ef1418f9464e95e739e3a543b14668c04159065dea6093d086d75a32d919e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3caf63dc183f47734bc7ada20dc729d98465449779a981f21080c0f23ef7e7d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://990bc2af578ada7fc2630b0d8ff77ab12bc15e0883ae62d8f64598944f6255f8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:50Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.377320 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dfd21a-8b34-4a6f-8b53-f220425fb369\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fe0c469c1d92b8cc814628a16596c31a9dae61fdc7820423d94a8a25c622c02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6facdd3120ee01990d31602f404ba1bfdd78ebfe3de3e0208a1f1058bede3472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fef38f10daeecfe4798979b037586cb7553da6ed81347ba4a1c2f1fa6671e269\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c7691cd25733ea26644ce2b86eb92587cf9c0545fbf32d8a39203f8ef305709\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2c7691cd25733ea26644ce2b86eb92587cf9c0545fbf32d8a39203f8ef305709\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:50Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.406399 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:50Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.438429 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6dafae6a-984e-4e99-90ca-76937bfcc3d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://96d87ca4be9786100acff503a83efa32f04c3269
b9f80b632e7ae0118942073c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-19T12:47:46Z\\\",\\\"message\\\":\\\"tring{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-operator-lifecycle-manager/olm-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/olm-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.168\\\\\\\", Port:8443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0219 12:47:46.479020 6828 ovn.go:134] Ensuring zone local for Pod openshift-machine-config-operator/machine-config-daemon-c2lxp in node crc\\\\nI0219 12:47:46.480470 6828 obj_retry.go:386] Retry successful for *v1.Pod openshift-machine-config-operator/machine-config-daemon-c2lxp after 0 failed attempt(s)\\\\nI0219 12:47:46.480483 6828 default_network_controller.go:776] Recording success event on pod openshift-machine-config-operator/machine-config-daemon-c2lxp\\\\nF0219 12:47:46.480578 6828 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:47:45Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pwqj9_openshift-ovn-kubernetes(6dafae6a-984e-4e99-90ca-76937bfcc3d6)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-chksh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pwqj9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:50Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.440472 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.440707 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.442158 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.442612 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.442812 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:50Z","lastTransitionTime":"2026-02-19T12:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.456733 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-clgkm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4177542e-89ba-436d-bc9d-e792f2da656c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nk85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5nk85\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:47:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-clgkm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:50Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.478584 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c103b3-9730-4f0a-b308-8deb1a89ec5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T12:46:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0219 12:46:44.006825 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0219 12:46:44.009905 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2000707540/tls.crt::/tmp/serving-cert-2000707540/tls.key\\\\\\\"\\\\nI0219 12:46:49.452249 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 12:46:49.466902 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 12:46:49.466926 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 12:46:49.466954 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 12:46:49.466960 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 12:46:49.480829 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 12:46:49.480855 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480859 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 12:46:49.480863 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 12:46:49.480866 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 12:46:49.480869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 12:46:49.480872 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 12:46:49.481078 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0219 12:46:49.484893 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:30Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:50Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.496447 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:50Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.510627 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1b032478ce1fd684af32ce6495ec3256986d402cc0694ef33a31b359412196\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:50Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.525415 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a396d626-cea2-42cf-84c5-943b0b85a92b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://974e8adb7f634f05d280c54259846c41c2b25188cec050cad707c2cbff3cb79e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m2vxj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-c2lxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:50Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.539122 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:50Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.545534 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.545565 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.545577 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.545593 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.545605 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:50Z","lastTransitionTime":"2026-02-19T12:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.563324 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-flbc2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"407965f4-206f-457e-9a8b-90948a537d06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24ae9deccb64ee85ac4adb206a70f8e11a3143dbb1939e7fc6e06f73639de860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad9a1968b045b7902a059d7002a3e6bf49b4d3a9945a17196f5aa9b0ec67601e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1103597ce94efd7de855401a6b1cd8503010993b1e426c1d4221d5987e532072\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ec8544bffaf2f7de437f26ddb20e02f38797f50e697f58238dfc1bb2f502ecf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://54511158f04e647a0802b850524162f42a3f6580fa4eca0337b60b81b4fd0d15\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d6b42f00a76fe1eac629b1faa46bf123985deda1f67a4170d2b0a6c51361ee5b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e0b82c1f561df02ca48a3d5c5e5e0afa8b7fd626d756d53690f9eb3691e5def\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T12:46:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T12:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gmx9d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:52Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-flbc2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:50Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.578141 4833 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-qhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbfd8074-c921-4100-a633-232f33b775b3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e82b73c77669166b2181b50da7d3d49dcd5463ce3338237b2bc03f0a207a88f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n7hlg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:54Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-qhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:50Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.590342 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"98bf14d5-7b12-4a96-b73b-3c8467eda471\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc3ae8695e69f07a2d2c47573b6295098b4c09e49549f32625f545aa3abe6cbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf8f07c6c232496c55aa498538c41bf18a5eb400cf7fa98145e9f88c129444e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnfnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:47:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6g2h7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:50Z is after 2025-08-24T17:21:41Z" Feb 19 
12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.608019 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1681eb9d545d0c38050e0563d94eec3e65d7c3f8ba5a56030574c197584465ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:50Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.626648 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a66871a3f0a01f7c467bb5e2785240674fa5a4a0ec6dce1ae4b0680a9365a859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f65da34cf52a6656cde60f663baa78d6fe7ceded387016b82022df34c6217079\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:50Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.641607 4833 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-p542x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a9544854-47a0-4750-b6a0-1f4a2bb1955a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T12:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef6048482f4ba273699273d9de3cd8348c6422ff6c8d654049ea8875571d9792\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T12:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67hx2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T12:46:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-p542x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-19T12:47:50Z is after 2025-08-24T17:21:41Z" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.648322 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.648382 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.648394 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.648439 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.648452 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:50Z","lastTransitionTime":"2026-02-19T12:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.751211 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.751836 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.751994 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.752154 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.752349 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:50Z","lastTransitionTime":"2026-02-19T12:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.855467 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.855563 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.855589 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.855618 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.855640 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:50Z","lastTransitionTime":"2026-02-19T12:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.959122 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.959611 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.959853 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.960073 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:50 crc kubenswrapper[4833]: I0219 12:47:50.960317 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:50Z","lastTransitionTime":"2026-02-19T12:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.063943 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.064012 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.064030 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.064056 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.064074 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:51Z","lastTransitionTime":"2026-02-19T12:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.166819 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.166891 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.166917 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.166947 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.166969 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:51Z","lastTransitionTime":"2026-02-19T12:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.269434 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.269533 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.269551 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.269576 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.269593 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:51Z","lastTransitionTime":"2026-02-19T12:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.297097 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 17:22:48.968120548 +0000 UTC Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.372737 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.372815 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.372833 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.372857 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.372876 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:51Z","lastTransitionTime":"2026-02-19T12:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.476668 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.476714 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.476725 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.476744 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.476758 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:51Z","lastTransitionTime":"2026-02-19T12:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.579898 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.579961 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.579983 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.580013 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.580035 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:51Z","lastTransitionTime":"2026-02-19T12:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.682939 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.683000 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.683017 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.683040 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.683057 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:51Z","lastTransitionTime":"2026-02-19T12:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.786415 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.786477 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.786521 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.786547 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.786564 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:51Z","lastTransitionTime":"2026-02-19T12:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.889803 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.889874 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.889897 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.889927 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.889947 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:51Z","lastTransitionTime":"2026-02-19T12:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.992779 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.992848 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.992868 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.992894 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:51 crc kubenswrapper[4833]: I0219 12:47:51.992911 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:51Z","lastTransitionTime":"2026-02-19T12:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.095276 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.095338 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.095354 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.095383 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.095400 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:52Z","lastTransitionTime":"2026-02-19T12:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.198260 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.198332 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.198357 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.198385 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.198409 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:52Z","lastTransitionTime":"2026-02-19T12:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.298168 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 07:22:14.770575037 +0000 UTC Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.301552 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.301620 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.301639 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.301663 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.301680 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:52Z","lastTransitionTime":"2026-02-19T12:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.314142 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.314171 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.314251 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:47:52 crc kubenswrapper[4833]: E0219 12:47:52.314437 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.314487 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:47:52 crc kubenswrapper[4833]: E0219 12:47:52.314649 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:47:52 crc kubenswrapper[4833]: E0219 12:47:52.314773 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:47:52 crc kubenswrapper[4833]: E0219 12:47:52.314879 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.404719 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.404777 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.404798 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.404831 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.404852 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:52Z","lastTransitionTime":"2026-02-19T12:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.508602 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.508691 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.508714 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.508743 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.508764 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:52Z","lastTransitionTime":"2026-02-19T12:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.611579 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.611633 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.611652 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.611674 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.611691 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:52Z","lastTransitionTime":"2026-02-19T12:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.714355 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.714419 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.714436 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.714460 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.714476 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:52Z","lastTransitionTime":"2026-02-19T12:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.817957 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.818021 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.818040 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.818063 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.818079 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:52Z","lastTransitionTime":"2026-02-19T12:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.920704 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.920786 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.920803 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.920823 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:52 crc kubenswrapper[4833]: I0219 12:47:52.920835 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:52Z","lastTransitionTime":"2026-02-19T12:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.023223 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.023287 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.023305 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.023332 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.023351 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:53Z","lastTransitionTime":"2026-02-19T12:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.126290 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.126343 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.126360 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.126382 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.126398 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:53Z","lastTransitionTime":"2026-02-19T12:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.228991 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.229042 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.229055 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.229073 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.229085 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:53Z","lastTransitionTime":"2026-02-19T12:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.298909 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 09:13:19.424930446 +0000 UTC Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.331946 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.332014 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.332029 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.332049 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.332063 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:53Z","lastTransitionTime":"2026-02-19T12:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.435158 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.435242 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.435258 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.435284 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.435303 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:53Z","lastTransitionTime":"2026-02-19T12:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.538247 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.538326 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.538349 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.538380 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.538399 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:53Z","lastTransitionTime":"2026-02-19T12:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.641424 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.641537 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.641565 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.641594 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.641615 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:53Z","lastTransitionTime":"2026-02-19T12:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.744698 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.744761 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.744778 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.744800 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.744817 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:53Z","lastTransitionTime":"2026-02-19T12:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.847715 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.847782 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.847802 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.847827 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.847846 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:53Z","lastTransitionTime":"2026-02-19T12:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.951008 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.951112 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.951132 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.951157 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:53 crc kubenswrapper[4833]: I0219 12:47:53.951176 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:53Z","lastTransitionTime":"2026-02-19T12:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.053814 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.053881 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.053898 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.053922 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.053938 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:54Z","lastTransitionTime":"2026-02-19T12:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.156943 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.156999 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.157016 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.157040 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.157058 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:54Z","lastTransitionTime":"2026-02-19T12:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.260254 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.260325 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.260343 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.260366 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.260382 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:54Z","lastTransitionTime":"2026-02-19T12:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.299592 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 21:23:40.832743971 +0000 UTC Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.314930 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.315005 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:47:54 crc kubenswrapper[4833]: E0219 12:47:54.315144 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.315165 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:47:54 crc kubenswrapper[4833]: E0219 12:47:54.315359 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.314960 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:47:54 crc kubenswrapper[4833]: E0219 12:47:54.315726 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:47:54 crc kubenswrapper[4833]: E0219 12:47:54.315896 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.333391 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:47:54 crc kubenswrapper[4833]: E0219 12:47:54.333597 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:58.333565125 +0000 UTC m=+148.729083953 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.333685 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.333739 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.333774 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.333819 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:47:54 crc kubenswrapper[4833]: E0219 12:47:54.333909 4833 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 12:47:54 crc kubenswrapper[4833]: E0219 12:47:54.333914 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 12:47:54 crc kubenswrapper[4833]: E0219 12:47:54.333949 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 12:47:54 crc kubenswrapper[4833]: E0219 12:47:54.333959 4833 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 12:47:54 crc kubenswrapper[4833]: E0219 12:47:54.333970 4833 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 12:47:54 crc kubenswrapper[4833]: E0219 12:47:54.333963 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 12:48:58.333949825 +0000 UTC m=+148.729468633 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 12:47:54 crc kubenswrapper[4833]: E0219 12:47:54.334050 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-19 12:48:58.334032167 +0000 UTC m=+148.729550975 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 12:47:54 crc kubenswrapper[4833]: E0219 12:47:54.334073 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-19 12:48:58.334062378 +0000 UTC m=+148.729581176 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 12:47:54 crc kubenswrapper[4833]: E0219 12:47:54.334114 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 12:47:54 crc kubenswrapper[4833]: E0219 12:47:54.334138 4833 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 12:47:54 crc kubenswrapper[4833]: E0219 12:47:54.334156 4833 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 12:47:54 crc kubenswrapper[4833]: E0219 12:47:54.334213 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-19 12:48:58.334188081 +0000 UTC m=+148.729706879 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.363577 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.363628 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.363649 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.363675 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.363695 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:54Z","lastTransitionTime":"2026-02-19T12:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
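The "No retries permitted until ... (durationBeforeRetry 1m4s)" entries show the volume manager's per-operation exponential backoff: 1m4s is 500 ms doubled seven times, consistent with a doubling backoff from a 500 ms base with a cap of roughly two minutes. The exact constants below are assumptions chosen to reproduce the figure in the log:

    package main

    import (
        "fmt"
        "time"
    )

    // durationBeforeRetry returns a capped exponential backoff for the
    // n-th consecutive failure of a volume operation. The 500 ms base,
    // factor 2 and ~2 m cap are assumptions that reproduce the 1m4s
    // seen above after eight failed MountVolume/UnmountVolume attempts.
    func durationBeforeRetry(failures int) time.Duration {
        const (
            base     = 500 * time.Millisecond
            maxDelay = 2*time.Minute + 2*time.Second
        )
        d := base
        for i := 1; i < failures; i++ {
            d *= 2
            if d >= maxDelay {
                return maxDelay
            }
        }
        return d
    }

    func main() {
        for n := 1; n <= 9; n++ {
            fmt.Printf("failure %d -> retry in %v\n", n, durationBeforeRetry(n))
        }
        // failure 8 -> retry in 1m4s, matching the log above
    }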
Has your network provider started?"} Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.466421 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.466523 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.466550 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.466578 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.466600 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:54Z","lastTransitionTime":"2026-02-19T12:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.569669 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.569744 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.569766 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.569793 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.569816 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:54Z","lastTransitionTime":"2026-02-19T12:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.671924 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.671973 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.671984 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.672001 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.672014 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:54Z","lastTransitionTime":"2026-02-19T12:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.774693 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.774745 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.774763 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.774784 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.774798 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:54Z","lastTransitionTime":"2026-02-19T12:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.878268 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.878355 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.878383 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.878413 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.878436 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:54Z","lastTransitionTime":"2026-02-19T12:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.980629 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.980691 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.980708 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.980731 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:54 crc kubenswrapper[4833]: I0219 12:47:54.980748 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:54Z","lastTransitionTime":"2026-02-19T12:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.084528 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.084607 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.084633 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.084663 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.084685 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:55Z","lastTransitionTime":"2026-02-19T12:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.187128 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.187208 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.187230 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.187263 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.187288 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:55Z","lastTransitionTime":"2026-02-19T12:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.290573 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.290646 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.290665 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.290691 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.290710 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:55Z","lastTransitionTime":"2026-02-19T12:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.300486 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 13:38:34.830881192 +0000 UTC Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.394255 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.394306 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.394322 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.394343 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.394358 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:55Z","lastTransitionTime":"2026-02-19T12:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.497408 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.497533 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.497557 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.497583 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.497687 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:55Z","lastTransitionTime":"2026-02-19T12:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.601204 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.601256 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.601273 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.601298 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.601315 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:55Z","lastTransitionTime":"2026-02-19T12:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.704194 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.704256 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.704273 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.704296 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.704316 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:55Z","lastTransitionTime":"2026-02-19T12:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.807757 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.807823 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.807840 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.807865 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.807883 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:55Z","lastTransitionTime":"2026-02-19T12:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.910452 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.910534 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.910552 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.910575 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:55 crc kubenswrapper[4833]: I0219 12:47:55.910592 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:55Z","lastTransitionTime":"2026-02-19T12:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.013891 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.013955 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.013975 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.014000 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.014018 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:56Z","lastTransitionTime":"2026-02-19T12:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.117396 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.117453 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.117474 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.117539 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.117566 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:56Z","lastTransitionTime":"2026-02-19T12:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.220613 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.220685 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.220709 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.220737 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.220759 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:56Z","lastTransitionTime":"2026-02-19T12:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.300954 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 18:35:19.133423174 +0000 UTC Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.314491 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.314564 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:47:56 crc kubenswrapper[4833]: E0219 12:47:56.315076 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.314674 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.314627 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:47:56 crc kubenswrapper[4833]: E0219 12:47:56.315413 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:47:56 crc kubenswrapper[4833]: E0219 12:47:56.315458 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:47:56 crc kubenswrapper[4833]: E0219 12:47:56.315095 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.323254 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.323461 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.323678 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.323917 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.324109 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:56Z","lastTransitionTime":"2026-02-19T12:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.426828 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.427148 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.427304 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.427442 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.427605 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:56Z","lastTransitionTime":"2026-02-19T12:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.530793 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.531217 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.531424 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.531653 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.531805 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:56Z","lastTransitionTime":"2026-02-19T12:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.634653 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.634701 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.634720 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.634752 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.634777 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:56Z","lastTransitionTime":"2026-02-19T12:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.737564 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.737628 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.737652 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.737688 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.737714 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:56Z","lastTransitionTime":"2026-02-19T12:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.840616 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.840691 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.840715 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.840739 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.840756 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:56Z","lastTransitionTime":"2026-02-19T12:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.943088 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.943160 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.943184 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.943214 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:56 crc kubenswrapper[4833]: I0219 12:47:56.943246 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:56Z","lastTransitionTime":"2026-02-19T12:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.046083 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.046153 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.046171 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.046197 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.046214 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:57Z","lastTransitionTime":"2026-02-19T12:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.149086 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.149146 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.149163 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.149186 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.149202 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:57Z","lastTransitionTime":"2026-02-19T12:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.251738 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.251815 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.251839 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.251871 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.251938 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:57Z","lastTransitionTime":"2026-02-19T12:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.292456 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.292592 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.292669 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.292748 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.292823 4833 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T12:47:57Z","lastTransitionTime":"2026-02-19T12:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.301949 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 22:26:20.753528056 +0000 UTC Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.368161 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-266dm"] Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.369064 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-266dm" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.371675 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.371721 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.372413 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.372696 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.430018 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-9p75n" podStartSLOduration=66.429999359 podStartE2EDuration="1m6.429999359s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:47:57.402017972 +0000 UTC m=+87.797536780" watchObservedRunningTime="2026-02-19 12:47:57.429999359 +0000 UTC m=+87.825518137" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.453114 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=62.453088845 podStartE2EDuration="1m2.453088845s" podCreationTimestamp="2026-02-19 12:46:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:47:57.432049911 +0000 UTC m=+87.827568719" watchObservedRunningTime="2026-02-19 12:47:57.453088845 +0000 UTC m=+87.848607653" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.470979 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f0df96f-a8c4-4fcb-b656-0fb1f42898bb-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-266dm\" (UID: \"6f0df96f-a8c4-4fcb-b656-0fb1f42898bb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-266dm" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.471109 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f0df96f-a8c4-4fcb-b656-0fb1f42898bb-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-266dm\" (UID: \"6f0df96f-a8c4-4fcb-b656-0fb1f42898bb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-266dm" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.471156 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6f0df96f-a8c4-4fcb-b656-0fb1f42898bb-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-266dm\" (UID: \"6f0df96f-a8c4-4fcb-b656-0fb1f42898bb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-266dm" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.471188 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6f0df96f-a8c4-4fcb-b656-0fb1f42898bb-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-266dm\" (UID: \"6f0df96f-a8c4-4fcb-b656-0fb1f42898bb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-266dm" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.471221 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6f0df96f-a8c4-4fcb-b656-0fb1f42898bb-service-ca\") pod \"cluster-version-operator-5c965bbfc6-266dm\" (UID: \"6f0df96f-a8c4-4fcb-b656-0fb1f42898bb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-266dm" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.472099 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=35.472071148 podStartE2EDuration="35.472071148s" podCreationTimestamp="2026-02-19 12:47:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:47:57.453640689 +0000 UTC m=+87.849159487" watchObservedRunningTime="2026-02-19 12:47:57.472071148 +0000 UTC m=+87.867589946" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.551080 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=67.551052977 podStartE2EDuration="1m7.551052977s" podCreationTimestamp="2026-02-19 12:46:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:47:57.550815471 +0000 UTC m=+87.946334309" watchObservedRunningTime="2026-02-19 12:47:57.551052977 +0000 UTC m=+87.946571785" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.572125 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f0df96f-a8c4-4fcb-b656-0fb1f42898bb-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-266dm\" (UID: \"6f0df96f-a8c4-4fcb-b656-0fb1f42898bb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-266dm" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.572230 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6f0df96f-a8c4-4fcb-b656-0fb1f42898bb-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-266dm\" (UID: \"6f0df96f-a8c4-4fcb-b656-0fb1f42898bb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-266dm" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.572333 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6f0df96f-a8c4-4fcb-b656-0fb1f42898bb-etc-ssl-certs\") pod 
\"cluster-version-operator-5c965bbfc6-266dm\" (UID: \"6f0df96f-a8c4-4fcb-b656-0fb1f42898bb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-266dm" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.572364 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6f0df96f-a8c4-4fcb-b656-0fb1f42898bb-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-266dm\" (UID: \"6f0df96f-a8c4-4fcb-b656-0fb1f42898bb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-266dm" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.572365 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6f0df96f-a8c4-4fcb-b656-0fb1f42898bb-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-266dm\" (UID: \"6f0df96f-a8c4-4fcb-b656-0fb1f42898bb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-266dm" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.572461 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6f0df96f-a8c4-4fcb-b656-0fb1f42898bb-service-ca\") pod \"cluster-version-operator-5c965bbfc6-266dm\" (UID: \"6f0df96f-a8c4-4fcb-b656-0fb1f42898bb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-266dm" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.572598 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f0df96f-a8c4-4fcb-b656-0fb1f42898bb-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-266dm\" (UID: \"6f0df96f-a8c4-4fcb-b656-0fb1f42898bb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-266dm" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.573815 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6f0df96f-a8c4-4fcb-b656-0fb1f42898bb-service-ca\") pod \"cluster-version-operator-5c965bbfc6-266dm\" (UID: \"6f0df96f-a8c4-4fcb-b656-0fb1f42898bb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-266dm" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.585666 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f0df96f-a8c4-4fcb-b656-0fb1f42898bb-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-266dm\" (UID: \"6f0df96f-a8c4-4fcb-b656-0fb1f42898bb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-266dm" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.619141 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6f0df96f-a8c4-4fcb-b656-0fb1f42898bb-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-266dm\" (UID: \"6f0df96f-a8c4-4fcb-b656-0fb1f42898bb\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-266dm" Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.642906 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=7.6428776769999995 podStartE2EDuration="7.642877677s" podCreationTimestamp="2026-02-19 12:47:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:47:57.64220464 +0000 UTC m=+88.037723488" watchObservedRunningTime="2026-02-19 12:47:57.642877677 +0000 UTC m=+88.038396485"
Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.643456 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podStartSLOduration=66.643444741 podStartE2EDuration="1m6.643444741s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:47:57.623293728 +0000 UTC m=+88.018812546" watchObservedRunningTime="2026-02-19 12:47:57.643444741 +0000 UTC m=+88.038963549"
Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.694387 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-flbc2" podStartSLOduration=66.694362 podStartE2EDuration="1m6.694362s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:47:57.689444758 +0000 UTC m=+88.084963576" watchObservedRunningTime="2026-02-19 12:47:57.694362 +0000 UTC m=+88.089880798"
Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.703529 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-266dm"
Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.709112 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-qhkjl" podStartSLOduration=66.709086637 podStartE2EDuration="1m6.709086637s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:47:57.708169664 +0000 UTC m=+88.103688502" watchObservedRunningTime="2026-02-19 12:47:57.709086637 +0000 UTC m=+88.104605445"
Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.725226 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6g2h7" podStartSLOduration=66.725204259 podStartE2EDuration="1m6.725204259s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:47:57.724112332 +0000 UTC m=+88.119631110" watchObservedRunningTime="2026-02-19 12:47:57.725204259 +0000 UTC m=+88.120723047"
Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.798563 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-p542x" podStartSLOduration=66.798537107 podStartE2EDuration="1m6.798537107s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:47:57.797584614 +0000 UTC m=+88.193103392" watchObservedRunningTime="2026-02-19 12:47:57.798537107 +0000 UTC m=+88.194055925"
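
The podStartSLOduration figures above are simple subtraction: watchObservedRunningTime minus podCreationTimestamp. A worked check for node-resolver-p542x, using the timestamps copied from its entry (Python, with the sub-second part truncated to microsecond precision):

    from datetime import datetime, timezone

    created = datetime(2026, 2, 19, 12, 46, 51, tzinfo=timezone.utc)
    # 12:47:57.798537107 truncated to microseconds:
    observed = datetime(2026, 2, 19, 12, 47, 57, 798537, tzinfo=timezone.utc)
    print((observed - created).total_seconds())  # 66.798537 -> logged as "1m6.798537107s"

Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.994385 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-266dm"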
event={"ID":"6f0df96f-a8c4-4fcb-b656-0fb1f42898bb","Type":"ContainerStarted","Data":"0b996885390d45be92ecde96cc1466e6b60d2d397a66df936cd45d48fc2fbecf"} Feb 19 12:47:57 crc kubenswrapper[4833]: I0219 12:47:57.994454 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-266dm" event={"ID":"6f0df96f-a8c4-4fcb-b656-0fb1f42898bb","Type":"ContainerStarted","Data":"9c946ff99040fe042a29aa2da37961cd5234c73031f99b78e28815b24458da8e"} Feb 19 12:47:58 crc kubenswrapper[4833]: I0219 12:47:58.014312 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-266dm" podStartSLOduration=67.014290126 podStartE2EDuration="1m7.014290126s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:47:58.013057855 +0000 UTC m=+88.408576653" watchObservedRunningTime="2026-02-19 12:47:58.014290126 +0000 UTC m=+88.409808934" Feb 19 12:47:58 crc kubenswrapper[4833]: I0219 12:47:58.302400 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 06:30:46.533847272 +0000 UTC Feb 19 12:47:58 crc kubenswrapper[4833]: I0219 12:47:58.302544 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 19 12:47:58 crc kubenswrapper[4833]: I0219 12:47:58.313263 4833 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 19 12:47:58 crc kubenswrapper[4833]: I0219 12:47:58.314469 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:47:58 crc kubenswrapper[4833]: I0219 12:47:58.314553 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:47:58 crc kubenswrapper[4833]: I0219 12:47:58.314475 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:47:58 crc kubenswrapper[4833]: I0219 12:47:58.314635 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:47:58 crc kubenswrapper[4833]: E0219 12:47:58.314783 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:47:58 crc kubenswrapper[4833]: E0219 12:47:58.314958 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:47:58 crc kubenswrapper[4833]: E0219 12:47:58.314982 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:47:58 crc kubenswrapper[4833]: E0219 12:47:58.315025 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:47:59 crc kubenswrapper[4833]: I0219 12:47:59.315450 4833 scope.go:117] "RemoveContainer" containerID="96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c" Feb 19 12:47:59 crc kubenswrapper[4833]: E0219 12:47:59.315639 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pwqj9_openshift-ovn-kubernetes(6dafae6a-984e-4e99-90ca-76937bfcc3d6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" Feb 19 12:48:00 crc kubenswrapper[4833]: I0219 12:48:00.314540 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:48:00 crc kubenswrapper[4833]: I0219 12:48:00.314595 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:48:00 crc kubenswrapper[4833]: I0219 12:48:00.314730 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:48:00 crc kubenswrapper[4833]: E0219 12:48:00.316004 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:48:00 crc kubenswrapper[4833]: I0219 12:48:00.316049 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:48:00 crc kubenswrapper[4833]: E0219 12:48:00.316385 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:48:00 crc kubenswrapper[4833]: E0219 12:48:00.316291 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:48:00 crc kubenswrapper[4833]: E0219 12:48:00.316485 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:48:02 crc kubenswrapper[4833]: I0219 12:48:02.314158 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:48:02 crc kubenswrapper[4833]: I0219 12:48:02.314168 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:48:02 crc kubenswrapper[4833]: I0219 12:48:02.314340 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:48:02 crc kubenswrapper[4833]: E0219 12:48:02.314471 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:48:02 crc kubenswrapper[4833]: I0219 12:48:02.314797 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:48:02 crc kubenswrapper[4833]: E0219 12:48:02.314928 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:48:02 crc kubenswrapper[4833]: E0219 12:48:02.315168 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:48:02 crc kubenswrapper[4833]: E0219 12:48:02.315461 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:48:04 crc kubenswrapper[4833]: I0219 12:48:04.314934 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:48:04 crc kubenswrapper[4833]: I0219 12:48:04.315068 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:48:04 crc kubenswrapper[4833]: E0219 12:48:04.315245 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:48:04 crc kubenswrapper[4833]: I0219 12:48:04.315340 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:48:04 crc kubenswrapper[4833]: I0219 12:48:04.315339 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:48:04 crc kubenswrapper[4833]: E0219 12:48:04.315863 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:48:04 crc kubenswrapper[4833]: E0219 12:48:04.316055 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:48:04 crc kubenswrapper[4833]: E0219 12:48:04.316125 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:48:04 crc kubenswrapper[4833]: I0219 12:48:04.338996 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 19 12:48:06 crc kubenswrapper[4833]: I0219 12:48:06.314270 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:48:06 crc kubenswrapper[4833]: I0219 12:48:06.314270 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:48:06 crc kubenswrapper[4833]: E0219 12:48:06.314840 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:48:06 crc kubenswrapper[4833]: I0219 12:48:06.314336 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:48:06 crc kubenswrapper[4833]: I0219 12:48:06.314336 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:48:06 crc kubenswrapper[4833]: E0219 12:48:06.315254 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:48:06 crc kubenswrapper[4833]: E0219 12:48:06.315321 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:48:06 crc kubenswrapper[4833]: E0219 12:48:06.315404 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:48:08 crc kubenswrapper[4833]: I0219 12:48:08.314302 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:48:08 crc kubenswrapper[4833]: E0219 12:48:08.314537 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:48:08 crc kubenswrapper[4833]: I0219 12:48:08.314841 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:48:08 crc kubenswrapper[4833]: I0219 12:48:08.314941 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:48:08 crc kubenswrapper[4833]: E0219 12:48:08.315109 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:48:08 crc kubenswrapper[4833]: I0219 12:48:08.315166 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:48:08 crc kubenswrapper[4833]: E0219 12:48:08.315328 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:48:08 crc kubenswrapper[4833]: E0219 12:48:08.315461 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:48:10 crc kubenswrapper[4833]: I0219 12:48:10.313667 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4177542e-89ba-436d-bc9d-e792f2da656c-metrics-certs\") pod \"network-metrics-daemon-clgkm\" (UID: \"4177542e-89ba-436d-bc9d-e792f2da656c\") " pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:48:10 crc kubenswrapper[4833]: E0219 12:48:10.313863 4833 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 12:48:10 crc kubenswrapper[4833]: E0219 12:48:10.313939 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4177542e-89ba-436d-bc9d-e792f2da656c-metrics-certs podName:4177542e-89ba-436d-bc9d-e792f2da656c nodeName:}" failed. No retries permitted until 2026-02-19 12:49:14.313915121 +0000 UTC m=+164.709433899 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4177542e-89ba-436d-bc9d-e792f2da656c-metrics-certs") pod "network-metrics-daemon-clgkm" (UID: "4177542e-89ba-436d-bc9d-e792f2da656c") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 19 12:48:10 crc kubenswrapper[4833]: I0219 12:48:10.314108 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 19 12:48:10 crc kubenswrapper[4833]: I0219 12:48:10.314114 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 19 12:48:10 crc kubenswrapper[4833]: I0219 12:48:10.314164 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm"
Feb 19 12:48:10 crc kubenswrapper[4833]: E0219 12:48:10.315294 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 19 12:48:10 crc kubenswrapper[4833]: I0219 12:48:10.315333 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 19 12:48:10 crc kubenswrapper[4833]: E0219 12:48:10.315434 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 19 12:48:10 crc kubenswrapper[4833]: E0219 12:48:10.315615 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
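
The metrics-certs mount above fails because the secret is not yet registered, and the operation is parked for 1m4s (12:48:10 + 64s = 12:49:14, matching the logged retry time). That interval is consistent with an exponential per-operation backoff doubling from a 500ms initial delay, which would make this roughly the eighth consecutive failure; a sketch of that schedule (base and factor are assumptions consistent with the observed 64s, not read from this log):

    def backoff_schedule(initial=0.5, factor=2.0, retries=8):
        # Delay before each successive retry: 0.5s, 1s, 2s, ..., 64s.
        delay = initial
        for attempt in range(1, retries + 1):
            yield attempt, delay
            delay *= factor

    for attempt, delay in backoff_schedule():
        print(f"failure {attempt}: retry in {delay:g}s")  # failure 8: retry in 64s

Feb 19 12:48:10 crc kubenswrapper[4833]: E0219 12:48:10.315656 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"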
pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:48:10 crc kubenswrapper[4833]: I0219 12:48:10.356710 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=6.356694417 podStartE2EDuration="6.356694417s" podCreationTimestamp="2026-02-19 12:48:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:10.354751739 +0000 UTC m=+100.750270517" watchObservedRunningTime="2026-02-19 12:48:10.356694417 +0000 UTC m=+100.752213195" Feb 19 12:48:12 crc kubenswrapper[4833]: I0219 12:48:12.314131 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:48:12 crc kubenswrapper[4833]: E0219 12:48:12.314307 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:48:12 crc kubenswrapper[4833]: I0219 12:48:12.314433 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:48:12 crc kubenswrapper[4833]: E0219 12:48:12.314524 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:48:12 crc kubenswrapper[4833]: I0219 12:48:12.314844 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:48:12 crc kubenswrapper[4833]: I0219 12:48:12.314845 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:48:12 crc kubenswrapper[4833]: E0219 12:48:12.315095 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:48:12 crc kubenswrapper[4833]: E0219 12:48:12.315243 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:48:13 crc kubenswrapper[4833]: I0219 12:48:13.316458 4833 scope.go:117] "RemoveContainer" containerID="96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c" Feb 19 12:48:13 crc kubenswrapper[4833]: E0219 12:48:13.316820 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pwqj9_openshift-ovn-kubernetes(6dafae6a-984e-4e99-90ca-76937bfcc3d6)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" Feb 19 12:48:14 crc kubenswrapper[4833]: I0219 12:48:14.314872 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:48:14 crc kubenswrapper[4833]: I0219 12:48:14.314906 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:48:14 crc kubenswrapper[4833]: I0219 12:48:14.314962 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:48:14 crc kubenswrapper[4833]: I0219 12:48:14.315043 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:48:14 crc kubenswrapper[4833]: E0219 12:48:14.315265 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:48:14 crc kubenswrapper[4833]: E0219 12:48:14.315406 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:48:14 crc kubenswrapper[4833]: E0219 12:48:14.315807 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:48:14 crc kubenswrapper[4833]: E0219 12:48:14.316024 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:48:16 crc kubenswrapper[4833]: I0219 12:48:16.314370 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:48:16 crc kubenswrapper[4833]: E0219 12:48:16.314770 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:48:16 crc kubenswrapper[4833]: I0219 12:48:16.314787 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:48:16 crc kubenswrapper[4833]: I0219 12:48:16.314941 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:48:16 crc kubenswrapper[4833]: E0219 12:48:16.314991 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:48:16 crc kubenswrapper[4833]: E0219 12:48:16.315187 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:48:16 crc kubenswrapper[4833]: I0219 12:48:16.315686 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:48:16 crc kubenswrapper[4833]: E0219 12:48:16.315824 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:48:18 crc kubenswrapper[4833]: I0219 12:48:18.314793 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:48:18 crc kubenswrapper[4833]: I0219 12:48:18.314830 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:48:18 crc kubenswrapper[4833]: I0219 12:48:18.314878 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:48:18 crc kubenswrapper[4833]: E0219 12:48:18.314977 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:48:18 crc kubenswrapper[4833]: I0219 12:48:18.314793 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:48:18 crc kubenswrapper[4833]: E0219 12:48:18.315167 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:48:18 crc kubenswrapper[4833]: E0219 12:48:18.315285 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:48:18 crc kubenswrapper[4833]: E0219 12:48:18.315379 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:48:20 crc kubenswrapper[4833]: I0219 12:48:20.314677 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:48:20 crc kubenswrapper[4833]: I0219 12:48:20.316545 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:48:20 crc kubenswrapper[4833]: I0219 12:48:20.316571 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:48:20 crc kubenswrapper[4833]: E0219 12:48:20.316743 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:48:20 crc kubenswrapper[4833]: I0219 12:48:20.316882 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:48:20 crc kubenswrapper[4833]: E0219 12:48:20.317002 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:48:20 crc kubenswrapper[4833]: E0219 12:48:20.317083 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:48:20 crc kubenswrapper[4833]: E0219 12:48:20.317186 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:48:22 crc kubenswrapper[4833]: I0219 12:48:22.314932 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:48:22 crc kubenswrapper[4833]: I0219 12:48:22.315135 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:48:22 crc kubenswrapper[4833]: I0219 12:48:22.315792 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:48:22 crc kubenswrapper[4833]: I0219 12:48:22.316199 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:48:22 crc kubenswrapper[4833]: E0219 12:48:22.316428 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:48:22 crc kubenswrapper[4833]: E0219 12:48:22.316625 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:48:22 crc kubenswrapper[4833]: E0219 12:48:22.316806 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:48:22 crc kubenswrapper[4833]: E0219 12:48:22.316976 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:48:24 crc kubenswrapper[4833]: I0219 12:48:24.315009 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:48:24 crc kubenswrapper[4833]: I0219 12:48:24.315090 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:48:24 crc kubenswrapper[4833]: I0219 12:48:24.315016 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:48:24 crc kubenswrapper[4833]: I0219 12:48:24.315010 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:48:24 crc kubenswrapper[4833]: E0219 12:48:24.315209 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:48:24 crc kubenswrapper[4833]: E0219 12:48:24.315367 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:48:24 crc kubenswrapper[4833]: E0219 12:48:24.315582 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:48:24 crc kubenswrapper[4833]: E0219 12:48:24.315785 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:48:26 crc kubenswrapper[4833]: I0219 12:48:26.104311 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9p75n_4e1957a0-ea7d-4831-ae8f-630a9529ece1/kube-multus/1.log" Feb 19 12:48:26 crc kubenswrapper[4833]: I0219 12:48:26.105443 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9p75n_4e1957a0-ea7d-4831-ae8f-630a9529ece1/kube-multus/0.log" Feb 19 12:48:26 crc kubenswrapper[4833]: I0219 12:48:26.105577 4833 generic.go:334] "Generic (PLEG): container finished" podID="4e1957a0-ea7d-4831-ae8f-630a9529ece1" containerID="7038937048d6f77bfed5b0b521c844fe325b30f343970d1b5f654ea93f433aae" exitCode=1 Feb 19 12:48:26 crc kubenswrapper[4833]: I0219 12:48:26.105633 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9p75n" event={"ID":"4e1957a0-ea7d-4831-ae8f-630a9529ece1","Type":"ContainerDied","Data":"7038937048d6f77bfed5b0b521c844fe325b30f343970d1b5f654ea93f433aae"} Feb 19 12:48:26 crc kubenswrapper[4833]: I0219 12:48:26.105690 4833 scope.go:117] "RemoveContainer" containerID="55210d43cd923bac579d5ea47e67af5086cb66cec6609cafaec1ef065b7db83e" Feb 19 12:48:26 crc kubenswrapper[4833]: I0219 12:48:26.106361 4833 scope.go:117] "RemoveContainer" containerID="7038937048d6f77bfed5b0b521c844fe325b30f343970d1b5f654ea93f433aae" Feb 19 12:48:26 crc kubenswrapper[4833]: E0219 12:48:26.106660 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-9p75n_openshift-multus(4e1957a0-ea7d-4831-ae8f-630a9529ece1)\"" pod="openshift-multus/multus-9p75n" podUID="4e1957a0-ea7d-4831-ae8f-630a9529ece1" Feb 19 12:48:26 crc kubenswrapper[4833]: I0219 12:48:26.314635 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:48:26 crc kubenswrapper[4833]: I0219 12:48:26.314682 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:48:26 crc kubenswrapper[4833]: I0219 12:48:26.314636 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:48:26 crc kubenswrapper[4833]: I0219 12:48:26.314636 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:48:26 crc kubenswrapper[4833]: E0219 12:48:26.314840 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:48:26 crc kubenswrapper[4833]: E0219 12:48:26.314756 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:48:26 crc kubenswrapper[4833]: E0219 12:48:26.314911 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:48:26 crc kubenswrapper[4833]: E0219 12:48:26.315156 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:48:27 crc kubenswrapper[4833]: I0219 12:48:27.111813 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9p75n_4e1957a0-ea7d-4831-ae8f-630a9529ece1/kube-multus/1.log" Feb 19 12:48:27 crc kubenswrapper[4833]: I0219 12:48:27.315551 4833 scope.go:117] "RemoveContainer" containerID="96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c" Feb 19 12:48:28 crc kubenswrapper[4833]: I0219 12:48:28.118608 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwqj9_6dafae6a-984e-4e99-90ca-76937bfcc3d6/ovnkube-controller/3.log" Feb 19 12:48:28 crc kubenswrapper[4833]: I0219 12:48:28.122655 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" event={"ID":"6dafae6a-984e-4e99-90ca-76937bfcc3d6","Type":"ContainerStarted","Data":"de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503"} Feb 19 12:48:28 crc kubenswrapper[4833]: I0219 12:48:28.123207 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:48:28 crc kubenswrapper[4833]: I0219 12:48:28.179794 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" podStartSLOduration=97.179762492 podStartE2EDuration="1m37.179762492s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:28.178675825 +0000 UTC m=+118.574194653" watchObservedRunningTime="2026-02-19 12:48:28.179762492 +0000 UTC m=+118.575281310" Feb 19 12:48:28 crc kubenswrapper[4833]: I0219 12:48:28.313997 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:48:28 crc kubenswrapper[4833]: I0219 12:48:28.314102 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:48:28 crc kubenswrapper[4833]: I0219 12:48:28.314195 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:48:28 crc kubenswrapper[4833]: E0219 12:48:28.314372 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:48:28 crc kubenswrapper[4833]: I0219 12:48:28.314396 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:48:28 crc kubenswrapper[4833]: E0219 12:48:28.314646 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:48:28 crc kubenswrapper[4833]: E0219 12:48:28.314546 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:48:28 crc kubenswrapper[4833]: E0219 12:48:28.314806 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:48:28 crc kubenswrapper[4833]: I0219 12:48:28.455350 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-clgkm"] Feb 19 12:48:29 crc kubenswrapper[4833]: I0219 12:48:29.125984 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:48:29 crc kubenswrapper[4833]: E0219 12:48:29.126159 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:48:30 crc kubenswrapper[4833]: E0219 12:48:30.255117 4833 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 19 12:48:30 crc kubenswrapper[4833]: I0219 12:48:30.314914 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:48:30 crc kubenswrapper[4833]: I0219 12:48:30.315001 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:48:30 crc kubenswrapper[4833]: E0219 12:48:30.316778 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:48:30 crc kubenswrapper[4833]: I0219 12:48:30.316830 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:48:30 crc kubenswrapper[4833]: E0219 12:48:30.317012 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:48:30 crc kubenswrapper[4833]: E0219 12:48:30.317217 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:48:30 crc kubenswrapper[4833]: E0219 12:48:30.443777 4833 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 12:48:31 crc kubenswrapper[4833]: I0219 12:48:31.313986 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:48:31 crc kubenswrapper[4833]: E0219 12:48:31.314117 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:48:32 crc kubenswrapper[4833]: I0219 12:48:32.314242 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:48:32 crc kubenswrapper[4833]: I0219 12:48:32.314257 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:48:32 crc kubenswrapper[4833]: E0219 12:48:32.315369 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:48:32 crc kubenswrapper[4833]: E0219 12:48:32.315555 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:48:32 crc kubenswrapper[4833]: I0219 12:48:32.314296 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:48:32 crc kubenswrapper[4833]: E0219 12:48:32.316159 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:48:33 crc kubenswrapper[4833]: I0219 12:48:33.314300 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:48:33 crc kubenswrapper[4833]: E0219 12:48:33.315752 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:48:33 crc kubenswrapper[4833]: I0219 12:48:33.388431 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:48:34 crc kubenswrapper[4833]: I0219 12:48:34.314611 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:48:34 crc kubenswrapper[4833]: I0219 12:48:34.314656 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:48:34 crc kubenswrapper[4833]: E0219 12:48:34.314752 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:48:34 crc kubenswrapper[4833]: E0219 12:48:34.314889 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:48:34 crc kubenswrapper[4833]: I0219 12:48:34.315561 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:48:34 crc kubenswrapper[4833]: E0219 12:48:34.315918 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:48:35 crc kubenswrapper[4833]: I0219 12:48:35.314859 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:48:35 crc kubenswrapper[4833]: E0219 12:48:35.315079 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:48:35 crc kubenswrapper[4833]: E0219 12:48:35.445479 4833 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 19 12:48:36 crc kubenswrapper[4833]: I0219 12:48:36.313974 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:48:36 crc kubenswrapper[4833]: E0219 12:48:36.314152 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:48:36 crc kubenswrapper[4833]: I0219 12:48:36.314204 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:48:36 crc kubenswrapper[4833]: E0219 12:48:36.314392 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:48:36 crc kubenswrapper[4833]: I0219 12:48:36.314710 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:48:36 crc kubenswrapper[4833]: E0219 12:48:36.314949 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:48:37 crc kubenswrapper[4833]: I0219 12:48:37.314565 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:48:37 crc kubenswrapper[4833]: E0219 12:48:37.314897 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:48:37 crc kubenswrapper[4833]: I0219 12:48:37.315204 4833 scope.go:117] "RemoveContainer" containerID="7038937048d6f77bfed5b0b521c844fe325b30f343970d1b5f654ea93f433aae" Feb 19 12:48:38 crc kubenswrapper[4833]: I0219 12:48:38.164368 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9p75n_4e1957a0-ea7d-4831-ae8f-630a9529ece1/kube-multus/1.log" Feb 19 12:48:38 crc kubenswrapper[4833]: I0219 12:48:38.164456 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9p75n" event={"ID":"4e1957a0-ea7d-4831-ae8f-630a9529ece1","Type":"ContainerStarted","Data":"02377c5a8c3efb73f777f9530db44ef08fb3c60dd8af6e87a01675e92eead6f8"} Feb 19 12:48:38 crc kubenswrapper[4833]: I0219 12:48:38.314193 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:48:38 crc kubenswrapper[4833]: I0219 12:48:38.314283 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:48:38 crc kubenswrapper[4833]: E0219 12:48:38.314402 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:48:38 crc kubenswrapper[4833]: I0219 12:48:38.314426 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:48:38 crc kubenswrapper[4833]: E0219 12:48:38.314582 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:48:38 crc kubenswrapper[4833]: E0219 12:48:38.314641 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:48:39 crc kubenswrapper[4833]: I0219 12:48:39.314297 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:48:39 crc kubenswrapper[4833]: E0219 12:48:39.314548 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-clgkm" podUID="4177542e-89ba-436d-bc9d-e792f2da656c" Feb 19 12:48:40 crc kubenswrapper[4833]: I0219 12:48:40.314104 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:48:40 crc kubenswrapper[4833]: I0219 12:48:40.314263 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:48:40 crc kubenswrapper[4833]: I0219 12:48:40.314360 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:48:40 crc kubenswrapper[4833]: E0219 12:48:40.316275 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 19 12:48:40 crc kubenswrapper[4833]: E0219 12:48:40.316573 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 19 12:48:40 crc kubenswrapper[4833]: E0219 12:48:40.316709 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 19 12:48:41 crc kubenswrapper[4833]: I0219 12:48:41.314385 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:48:41 crc kubenswrapper[4833]: I0219 12:48:41.317607 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 19 12:48:41 crc kubenswrapper[4833]: I0219 12:48:41.318856 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 19 12:48:42 crc kubenswrapper[4833]: I0219 12:48:42.314288 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:48:42 crc kubenswrapper[4833]: I0219 12:48:42.314360 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:48:42 crc kubenswrapper[4833]: I0219 12:48:42.314578 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:48:42 crc kubenswrapper[4833]: I0219 12:48:42.317833 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 19 12:48:42 crc kubenswrapper[4833]: I0219 12:48:42.317902 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 19 12:48:42 crc kubenswrapper[4833]: I0219 12:48:42.318000 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 19 12:48:42 crc kubenswrapper[4833]: I0219 12:48:42.318006 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.674045 4833 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.725565 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-cm4vr"] Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.726621 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-cm4vr" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.730293 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.730417 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.732737 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.733061 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.733170 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.733428 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.733661 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.733727 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.734603 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.738107 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-lmrs2"] Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.739176 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-lmrs2" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.740954 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.742121 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.742440 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.743021 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.743106 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.743030 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.749607 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-m5wfx"] Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.750181 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-ntq2z"] Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.751182 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntq2z" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.750198 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.751849 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-m5wfx" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.751978 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-66dsh"] Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.752561 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-66dsh" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.755218 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.757134 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.758320 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.758664 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.760114 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b"] Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.760616 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.766364 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.766667 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.767747 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.769453 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.769456 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.777853 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.778172 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.778410 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.780282 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.782048 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzmp9"] Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.782456 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.782825 4833 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nwx99"] Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.783348 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nwx99" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.782815 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.782845 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.783173 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.784344 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.784420 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.784541 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.784561 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.784860 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.788778 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.788972 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.789046 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.798677 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.798865 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.799573 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.799791 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.799868 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.800024 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.800123 4833 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-authentication"/"kube-root-ca.crt" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.799809 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.800635 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzmp9" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.801048 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.802433 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdwvj"] Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.803096 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdwvj" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.803386 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-hwh5m"] Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.803884 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-hwh5m" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.804728 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.804802 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-x2kph"] Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.805188 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2kph" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.808047 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.808159 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.811439 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.811745 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.811746 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.814757 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-z4dsv"] Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.815194 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lhlvm"] Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.815458 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5w7fq"] Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.815626 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.815838 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5w7fq" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.816069 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-z4dsv" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.816259 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lhlvm" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.817583 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lhs8n"] Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.817757 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.817962 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.818111 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.818120 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.818340 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.818437 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.818540 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.818596 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.818672 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.818727 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.819805 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.821620 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.822883 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-k855k"] Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.823414 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k855k" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.823806 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-wqw4k"] Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.824096 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-wqw4k" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.824142 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.826575 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mcmtw"] Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.826621 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.826723 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.826822 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.826923 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.826947 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-zjv88"] Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.827013 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.827063 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.827087 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.827140 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.827198 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.827272 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mcmtw" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.827374 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-zjv88" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.827630 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.827716 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.827756 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.827862 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.827890 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.839226 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.840433 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.840984 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.841226 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.841324 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.841655 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.841951 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.845884 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.846425 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.846690 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.846835 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.847055 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.847095 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 19 12:48:48 crc 
kubenswrapper[4833]: I0219 12:48:48.873371 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwgcg\" (UniqueName: \"kubernetes.io/projected/8a863328-15b8-46bc-9ffd-faa97add46ea-kube-api-access-zwgcg\") pod \"machine-api-operator-5694c8668f-lmrs2\" (UID: \"8a863328-15b8-46bc-9ffd-faa97add46ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lmrs2" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.873419 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.873446 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/85474a25-567e-4ef0-be8a-75de8c7d18d9-node-pullsecrets\") pod \"apiserver-76f77b778f-cm4vr\" (UID: \"85474a25-567e-4ef0-be8a-75de8c7d18d9\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4vr" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.873474 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.873514 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn6zf\" (UniqueName: \"kubernetes.io/projected/6bcce72d-6a5d-42d2-b7ed-c721057061f6-kube-api-access-cn6zf\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.873538 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e58cf141-d5f2-4832-ab5c-067a3674cbb8-config\") pod \"machine-approver-56656f9798-ntq2z\" (UID: \"e58cf141-d5f2-4832-ab5c-067a3674cbb8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntq2z" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.873560 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a883fd6-0a70-4705-951f-df5e7b1bb863-serving-cert\") pod \"authentication-operator-69f744f599-m5wfx\" (UID: \"5a883fd6-0a70-4705-951f-df5e7b1bb863\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m5wfx" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.873581 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-krb8b\" (UID: \"6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b" Feb 19 12:48:48 crc 
kubenswrapper[4833]: I0219 12:48:48.873600 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6-audit-dir\") pod \"apiserver-7bbb656c7d-krb8b\" (UID: \"6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.873619 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m54ps\" (UniqueName: \"kubernetes.io/projected/6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6-kube-api-access-m54ps\") pod \"apiserver-7bbb656c7d-krb8b\" (UID: \"6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.873639 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/85474a25-567e-4ef0-be8a-75de8c7d18d9-encryption-config\") pod \"apiserver-76f77b778f-cm4vr\" (UID: \"85474a25-567e-4ef0-be8a-75de8c7d18d9\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4vr"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.873660 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e58cf141-d5f2-4832-ab5c-067a3674cbb8-auth-proxy-config\") pod \"machine-approver-56656f9798-ntq2z\" (UID: \"e58cf141-d5f2-4832-ab5c-067a3674cbb8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntq2z"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.873681 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6-serving-cert\") pod \"apiserver-7bbb656c7d-krb8b\" (UID: \"6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.873701 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e58cf141-d5f2-4832-ab5c-067a3674cbb8-machine-approver-tls\") pod \"machine-approver-56656f9798-ntq2z\" (UID: \"e58cf141-d5f2-4832-ab5c-067a3674cbb8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntq2z"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.873736 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6bcce72d-6a5d-42d2-b7ed-c721057061f6-audit-policies\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.873757 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.873777 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.873796 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6-encryption-config\") pod \"apiserver-7bbb656c7d-krb8b\" (UID: \"6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.873818 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a863328-15b8-46bc-9ffd-faa97add46ea-config\") pod \"machine-api-operator-5694c8668f-lmrs2\" (UID: \"8a863328-15b8-46bc-9ffd-faa97add46ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lmrs2"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.873843 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.873864 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a883fd6-0a70-4705-951f-df5e7b1bb863-config\") pod \"authentication-operator-69f744f599-m5wfx\" (UID: \"5a883fd6-0a70-4705-951f-df5e7b1bb863\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m5wfx"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.873883 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a883fd6-0a70-4705-951f-df5e7b1bb863-service-ca-bundle\") pod \"authentication-operator-69f744f599-m5wfx\" (UID: \"5a883fd6-0a70-4705-951f-df5e7b1bb863\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m5wfx"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.873901 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/85474a25-567e-4ef0-be8a-75de8c7d18d9-audit-dir\") pod \"apiserver-76f77b778f-cm4vr\" (UID: \"85474a25-567e-4ef0-be8a-75de8c7d18d9\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4vr"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.873921 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.873950 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6-etcd-client\") pod \"apiserver-7bbb656c7d-krb8b\" (UID: \"6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.873976 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85474a25-567e-4ef0-be8a-75de8c7d18d9-config\") pod \"apiserver-76f77b778f-cm4vr\" (UID: \"85474a25-567e-4ef0-be8a-75de8c7d18d9\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4vr"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.873995 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/85474a25-567e-4ef0-be8a-75de8c7d18d9-etcd-serving-ca\") pod \"apiserver-76f77b778f-cm4vr\" (UID: \"85474a25-567e-4ef0-be8a-75de8c7d18d9\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4vr"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.874015 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/85474a25-567e-4ef0-be8a-75de8c7d18d9-image-import-ca\") pod \"apiserver-76f77b778f-cm4vr\" (UID: \"85474a25-567e-4ef0-be8a-75de8c7d18d9\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4vr"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.874034 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85474a25-567e-4ef0-be8a-75de8c7d18d9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-cm4vr\" (UID: \"85474a25-567e-4ef0-be8a-75de8c7d18d9\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4vr"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.874056 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.874076 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-krb8b\" (UID: \"6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.874094 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/85474a25-567e-4ef0-be8a-75de8c7d18d9-audit\") pod \"apiserver-76f77b778f-cm4vr\" (UID: \"85474a25-567e-4ef0-be8a-75de8c7d18d9\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4vr"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.874113 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a883fd6-0a70-4705-951f-df5e7b1bb863-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-m5wfx\" (UID: \"5a883fd6-0a70-4705-951f-df5e7b1bb863\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m5wfx"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.874135 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6-audit-policies\") pod \"apiserver-7bbb656c7d-krb8b\" (UID: \"6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.874157 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h4ch\" (UniqueName: \"kubernetes.io/projected/85474a25-567e-4ef0-be8a-75de8c7d18d9-kube-api-access-6h4ch\") pod \"apiserver-76f77b778f-cm4vr\" (UID: \"85474a25-567e-4ef0-be8a-75de8c7d18d9\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4vr"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.874195 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzmh4\" (UniqueName: \"kubernetes.io/projected/5a883fd6-0a70-4705-951f-df5e7b1bb863-kube-api-access-wzmh4\") pod \"authentication-operator-69f744f599-m5wfx\" (UID: \"5a883fd6-0a70-4705-951f-df5e7b1bb863\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m5wfx"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.874217 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.874235 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8a863328-15b8-46bc-9ffd-faa97add46ea-images\") pod \"machine-api-operator-5694c8668f-lmrs2\" (UID: \"8a863328-15b8-46bc-9ffd-faa97add46ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lmrs2"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.874268 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.874299 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6bcce72d-6a5d-42d2-b7ed-c721057061f6-audit-dir\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.874321 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/85474a25-567e-4ef0-be8a-75de8c7d18d9-etcd-client\") pod \"apiserver-76f77b778f-cm4vr\" (UID: \"85474a25-567e-4ef0-be8a-75de8c7d18d9\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4vr"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.874342 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.874364 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a863328-15b8-46bc-9ffd-faa97add46ea-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-lmrs2\" (UID: \"8a863328-15b8-46bc-9ffd-faa97add46ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lmrs2"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.874393 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpb9v\" (UniqueName: \"kubernetes.io/projected/e58cf141-d5f2-4832-ab5c-067a3674cbb8-kube-api-access-rpb9v\") pod \"machine-approver-56656f9798-ntq2z\" (UID: \"e58cf141-d5f2-4832-ab5c-067a3674cbb8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntq2z"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.874449 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85474a25-567e-4ef0-be8a-75de8c7d18d9-serving-cert\") pod \"apiserver-76f77b778f-cm4vr\" (UID: \"85474a25-567e-4ef0-be8a-75de8c7d18d9\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4vr"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.874474 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.874481 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.874698 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.875886 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.876245 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.876929 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9hhkc"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.877340 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9hhkc"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.878014 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.878280 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qcdcz"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.878537 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.878768 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-qcdcz"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.884462 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4gj46"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.884915 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4gj46"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.885005 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8cbn5"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.885483 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8cbn5"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.885978 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.886167 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.886193 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zw6vx"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.886654 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zw6vx"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.888509 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4bk7q"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.888884 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4bk7q"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.890805 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.891216 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-k9bp4"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.891600 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhmhd"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.891721 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-k9bp4"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.891912 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhmhd"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.893882 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzk7x"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.894287 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzk7x"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.895868 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-9r2j9"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.896773 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-flg7d"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.897859 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rzdxr"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.897931 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9r2j9"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.898601 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bpx5m"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.898694 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-flg7d"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.898712 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rzdxr"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.899356 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-fr925"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.899634 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bpx5m"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.899908 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fr925"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.901413 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5db89"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.901803 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5db89"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.903250 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nlrpv"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.903943 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nlrpv"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.904537 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-shgxh"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.904971 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-shgxh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.905326 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-cm4vr"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.906581 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525085-npsvp"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.906928 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525085-npsvp"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.908078 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4xshp"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.908518 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4xshp"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.909363 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.909824 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-m5wfx"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.911636 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hwh5m"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.912707 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdwvj"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.913593 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-lmrs2"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.914443 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nwx99"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.915334 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-z4dsv"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.916341 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mcmtw"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.920690 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lhs8n"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.922464 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-wqw4k"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.923995 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-66dsh"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.926047 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-zjv88"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.928611 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.934844 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-86hv6"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.938850 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-k855k"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.938938 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-86hv6"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.941986 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-x2kph"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.943824 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzmp9"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.945262 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9hhkc"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.946408 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4gj46"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.947689 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4bk7q"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.948711 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.949689 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rzdxr"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.950928 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5db89"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.952323 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8cbn5"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.954105 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5w7fq"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.955771 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-fr925"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.957085 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525085-npsvp"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.958038 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.959338 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lhlvm"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.961652 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhmhd"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.963003 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-flg7d"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.963820 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qcdcz"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.965062 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zw6vx"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.966335 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bpx5m"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.967473 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzk7x"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.968547 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-9r2j9"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.968744 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.969706 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-wdshf"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.970301 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-wdshf"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.970792 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-86hv6"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.971885 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-shgxh"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.973005 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7vxjx"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.974464 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nlrpv"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.974691 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-7vxjx"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.975635 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-wdshf"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.976867 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7vxjx"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.977225 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwgcg\" (UniqueName: \"kubernetes.io/projected/8a863328-15b8-46bc-9ffd-faa97add46ea-kube-api-access-zwgcg\") pod \"machine-api-operator-5694c8668f-lmrs2\" (UID: \"8a863328-15b8-46bc-9ffd-faa97add46ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lmrs2"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.977259 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wxmb\" (UniqueName: \"kubernetes.io/projected/a50be962-42d3-4e0a-bf3d-13e00fd679a2-kube-api-access-4wxmb\") pod \"cluster-image-registry-operator-dc59b4c8b-5w7fq\" (UID: \"a50be962-42d3-4e0a-bf3d-13e00fd679a2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5w7fq"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.977279 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d665cab8-36f2-4952-b4ca-75f832485488-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lhlvm\" (UID: \"d665cab8-36f2-4952-b4ca-75f832485488\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lhlvm"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.977300 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.977319 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/85474a25-567e-4ef0-be8a-75de8c7d18d9-node-pullsecrets\") pod \"apiserver-76f77b778f-cm4vr\" (UID: \"85474a25-567e-4ef0-be8a-75de8c7d18d9\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4vr"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.977337 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.977361 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn6zf\" (UniqueName: \"kubernetes.io/projected/6bcce72d-6a5d-42d2-b7ed-c721057061f6-kube-api-access-cn6zf\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.977399 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/85474a25-567e-4ef0-be8a-75de8c7d18d9-node-pullsecrets\") pod \"apiserver-76f77b778f-cm4vr\" (UID: \"85474a25-567e-4ef0-be8a-75de8c7d18d9\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4vr"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.977405 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e58cf141-d5f2-4832-ab5c-067a3674cbb8-config\") pod \"machine-approver-56656f9798-ntq2z\" (UID: \"e58cf141-d5f2-4832-ab5c-067a3674cbb8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntq2z"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.977487 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a883fd6-0a70-4705-951f-df5e7b1bb863-serving-cert\") pod \"authentication-operator-69f744f599-m5wfx\" (UID: \"5a883fd6-0a70-4705-951f-df5e7b1bb863\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m5wfx"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.977606 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-krb8b\" (UID: \"6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.977630 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6-audit-dir\") pod \"apiserver-7bbb656c7d-krb8b\" (UID: \"6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.977675 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m54ps\" (UniqueName: \"kubernetes.io/projected/6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6-kube-api-access-m54ps\") pod \"apiserver-7bbb656c7d-krb8b\" (UID: \"6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.977700 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/85474a25-567e-4ef0-be8a-75de8c7d18d9-encryption-config\") pod \"apiserver-76f77b778f-cm4vr\" (UID: \"85474a25-567e-4ef0-be8a-75de8c7d18d9\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4vr"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.977729 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e58cf141-d5f2-4832-ab5c-067a3674cbb8-auth-proxy-config\") pod \"machine-approver-56656f9798-ntq2z\" (UID: \"e58cf141-d5f2-4832-ab5c-067a3674cbb8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntq2z"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.977780 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/78fc8d4f-ecae-4d57-b0a6-4a31751eb3c0-available-featuregates\") pod \"openshift-config-operator-7777fb866f-x2kph\" (UID: \"78fc8d4f-ecae-4d57-b0a6-4a31751eb3c0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2kph"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.977809 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6-serving-cert\") pod \"apiserver-7bbb656c7d-krb8b\" (UID: \"6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.977851 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e58cf141-d5f2-4832-ab5c-067a3674cbb8-machine-approver-tls\") pod \"machine-approver-56656f9798-ntq2z\" (UID: \"e58cf141-d5f2-4832-ab5c-067a3674cbb8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntq2z"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.977882 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78fc8d4f-ecae-4d57-b0a6-4a31751eb3c0-serving-cert\") pod \"openshift-config-operator-7777fb866f-x2kph\" (UID: \"78fc8d4f-ecae-4d57-b0a6-4a31751eb3c0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2kph"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.977915 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6-audit-dir\") pod \"apiserver-7bbb656c7d-krb8b\" (UID: \"6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.977931 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6bcce72d-6a5d-42d2-b7ed-c721057061f6-audit-policies\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.977976 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.978036 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.978079 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6-encryption-config\") pod \"apiserver-7bbb656c7d-krb8b\" (UID: \"6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.978112 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a863328-15b8-46bc-9ffd-faa97add46ea-config\") pod \"machine-api-operator-5694c8668f-lmrs2\" (UID: \"8a863328-15b8-46bc-9ffd-faa97add46ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lmrs2"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.978161 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.978191 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a50be962-42d3-4e0a-bf3d-13e00fd679a2-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5w7fq\" (UID: \"a50be962-42d3-4e0a-bf3d-13e00fd679a2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5w7fq"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.978212 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-krb8b\" (UID: \"6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.978245 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/85474a25-567e-4ef0-be8a-75de8c7d18d9-audit-dir\") pod \"apiserver-76f77b778f-cm4vr\" (UID: \"85474a25-567e-4ef0-be8a-75de8c7d18d9\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4vr"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.978274 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.978297 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a883fd6-0a70-4705-951f-df5e7b1bb863-config\") pod \"authentication-operator-69f744f599-m5wfx\" (UID: \"5a883fd6-0a70-4705-951f-df5e7b1bb863\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m5wfx"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.978342 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a883fd6-0a70-4705-951f-df5e7b1bb863-service-ca-bundle\") pod \"authentication-operator-69f744f599-m5wfx\" (UID: \"5a883fd6-0a70-4705-951f-df5e7b1bb863\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m5wfx"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.978366 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85474a25-567e-4ef0-be8a-75de8c7d18d9-config\") pod \"apiserver-76f77b778f-cm4vr\" (UID: \"85474a25-567e-4ef0-be8a-75de8c7d18d9\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4vr"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.978386 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/85474a25-567e-4ef0-be8a-75de8c7d18d9-etcd-serving-ca\") pod \"apiserver-76f77b778f-cm4vr\" (UID: \"85474a25-567e-4ef0-be8a-75de8c7d18d9\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4vr"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.978430 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/85474a25-567e-4ef0-be8a-75de8c7d18d9-image-import-ca\") pod \"apiserver-76f77b778f-cm4vr\" (UID: \"85474a25-567e-4ef0-be8a-75de8c7d18d9\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4vr"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.978453 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6-etcd-client\") pod \"apiserver-7bbb656c7d-krb8b\" (UID: \"6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.978508 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85474a25-567e-4ef0-be8a-75de8c7d18d9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-cm4vr\" (UID: \"85474a25-567e-4ef0-be8a-75de8c7d18d9\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4vr"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.978537 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnqts\" (UniqueName: \"kubernetes.io/projected/78fc8d4f-ecae-4d57-b0a6-4a31751eb3c0-kube-api-access-bnqts\") pod \"openshift-config-operator-7777fb866f-x2kph\" (UID: \"78fc8d4f-ecae-4d57-b0a6-4a31751eb3c0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2kph"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.978587 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.978615 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a50be962-42d3-4e0a-bf3d-13e00fd679a2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5w7fq\" (UID: \"a50be962-42d3-4e0a-bf3d-13e00fd679a2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5w7fq"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.978660 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/85474a25-567e-4ef0-be8a-75de8c7d18d9-audit\") pod \"apiserver-76f77b778f-cm4vr\" (UID: \"85474a25-567e-4ef0-be8a-75de8c7d18d9\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4vr"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.978686 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-krb8b\" (UID: \"6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.978733 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a883fd6-0a70-4705-951f-df5e7b1bb863-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-m5wfx\" (UID: \"5a883fd6-0a70-4705-951f-df5e7b1bb863\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m5wfx"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.978945 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6-audit-policies\") pod \"apiserver-7bbb656c7d-krb8b\" (UID: \"6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.978971 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.978978 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h4ch\" (UniqueName: \"kubernetes.io/projected/85474a25-567e-4ef0-be8a-75de8c7d18d9-kube-api-access-6h4ch\") pod \"apiserver-76f77b778f-cm4vr\" (UID: \"85474a25-567e-4ef0-be8a-75de8c7d18d9\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4vr"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.979060 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzmh4\" (UniqueName: \"kubernetes.io/projected/5a883fd6-0a70-4705-951f-df5e7b1bb863-kube-api-access-wzmh4\") pod \"authentication-operator-69f744f599-m5wfx\" (UID: \"5a883fd6-0a70-4705-951f-df5e7b1bb863\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m5wfx"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.979105 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.979136 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8a863328-15b8-46bc-9ffd-faa97add46ea-images\") pod \"machine-api-operator-5694c8668f-lmrs2\" (UID: \"8a863328-15b8-46bc-9ffd-faa97add46ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lmrs2"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.979185 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d665cab8-36f2-4952-b4ca-75f832485488-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lhlvm\" (UID: \"d665cab8-36f2-4952-b4ca-75f832485488\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lhlvm"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.979229 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.979278 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z9cb\" (UniqueName: \"kubernetes.io/projected/d665cab8-36f2-4952-b4ca-75f832485488-kube-api-access-9z9cb\") pod \"openshift-controller-manager-operator-756b6f6bc6-lhlvm\" (UID: \"d665cab8-36f2-4952-b4ca-75f832485488\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lhlvm"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.979321 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6bcce72d-6a5d-42d2-b7ed-c721057061f6-audit-dir\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.979373 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a50be962-42d3-4e0a-bf3d-13e00fd679a2-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5w7fq\" (UID: \"a50be962-42d3-4e0a-bf3d-13e00fd679a2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5w7fq"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.979386 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4xshp"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.979402 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/85474a25-567e-4ef0-be8a-75de8c7d18d9-etcd-client\") pod \"apiserver-76f77b778f-cm4vr\" (UID: \"85474a25-567e-4ef0-be8a-75de8c7d18d9\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4vr"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.979455 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.979480 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a863328-15b8-46bc-9ffd-faa97add46ea-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-lmrs2\" (UID: \"8a863328-15b8-46bc-9ffd-faa97add46ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lmrs2"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.983780 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a883fd6-0a70-4705-951f-df5e7b1bb863-config\") pod \"authentication-operator-69f744f599-m5wfx\" (UID: \"5a883fd6-0a70-4705-951f-df5e7b1bb863\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m5wfx"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.980263 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/85474a25-567e-4ef0-be8a-75de8c7d18d9-audit-dir\") pod \"apiserver-76f77b778f-cm4vr\" (UID: \"85474a25-567e-4ef0-be8a-75de8c7d18d9\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4vr"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.979680 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85474a25-567e-4ef0-be8a-75de8c7d18d9-config\") pod \"apiserver-76f77b778f-cm4vr\" (UID: \"85474a25-567e-4ef0-be8a-75de8c7d18d9\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4vr"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.980379 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-xjng4"]
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.983890 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.980901 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6-audit-policies\") pod \"apiserver-7bbb656c7d-krb8b\" (UID: \"6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.981292 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/85474a25-567e-4ef0-be8a-75de8c7d18d9-audit\") pod \"apiserver-76f77b778f-cm4vr\" (UID: \"85474a25-567e-4ef0-be8a-75de8c7d18d9\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4vr"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.981351 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.981517 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-krb8b\" (UID: \"6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.983963 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a883fd6-0a70-4705-951f-df5e7b1bb863-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-m5wfx\" (UID: \"5a883fd6-0a70-4705-951f-df5e7b1bb863\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m5wfx"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.980854 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e58cf141-d5f2-4832-ab5c-067a3674cbb8-auth-proxy-config\") pod \"machine-approver-56656f9798-ntq2z\" (UID: \"e58cf141-d5f2-4832-ab5c-067a3674cbb8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntq2z"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.982444 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/85474a25-567e-4ef0-be8a-75de8c7d18d9-image-import-ca\") pod \"apiserver-76f77b778f-cm4vr\" (UID: \"85474a25-567e-4ef0-be8a-75de8c7d18d9\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4vr"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.982739 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6bcce72d-6a5d-42d2-b7ed-c721057061f6-audit-dir\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.982777 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.982920 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6-encryption-config\") pod \"apiserver-7bbb656c7d-krb8b\" (UID: \"6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.983108 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6bcce72d-6a5d-42d2-b7ed-c721057061f6-audit-policies\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.983219 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/85474a25-567e-4ef0-be8a-75de8c7d18d9-etcd-serving-ca\") pod \"apiserver-76f77b778f-cm4vr\" (UID: \"85474a25-567e-4ef0-be8a-75de8c7d18d9\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4vr"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.983238 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a883fd6-0a70-4705-951f-df5e7b1bb863-service-ca-bundle\") pod \"authentication-operator-69f744f599-m5wfx\" (UID: \"5a883fd6-0a70-4705-951f-df5e7b1bb863\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m5wfx"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.983484 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a863328-15b8-46bc-9ffd-faa97add46ea-config\") pod \"machine-api-operator-5694c8668f-lmrs2\" (UID: \"8a863328-15b8-46bc-9ffd-faa97add46ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lmrs2"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.983728 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85474a25-567e-4ef0-be8a-75de8c7d18d9-trusted-ca-bundle\") pod \"apiserver-76f77b778f-cm4vr\" (UID: \"85474a25-567e-4ef0-be8a-75de8c7d18d9\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4vr"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.984071 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.983795 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpb9v\" (UniqueName: \"kubernetes.io/projected/e58cf141-d5f2-4832-ab5c-067a3674cbb8-kube-api-access-rpb9v\") pod \"machine-approver-56656f9798-ntq2z\" (UID: \"e58cf141-d5f2-4832-ab5c-067a3674cbb8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntq2z"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.982437 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/85474a25-567e-4ef0-be8a-75de8c7d18d9-encryption-config\") pod \"apiserver-76f77b778f-cm4vr\" (UID: \"85474a25-567e-4ef0-be8a-75de8c7d18d9\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4vr"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.983593 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8a863328-15b8-46bc-9ffd-faa97add46ea-images\") pod \"machine-api-operator-5694c8668f-lmrs2\" (UID: \"8a863328-15b8-46bc-9ffd-faa97add46ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lmrs2"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.978080 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e58cf141-d5f2-4832-ab5c-067a3674cbb8-config\") pod \"machine-approver-56656f9798-ntq2z\" (UID: \"e58cf141-d5f2-4832-ab5c-067a3674cbb8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntq2z"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.984127 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85474a25-567e-4ef0-be8a-75de8c7d18d9-serving-cert\") pod \"apiserver-76f77b778f-cm4vr\" (UID: \"85474a25-567e-4ef0-be8a-75de8c7d18d9\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4vr"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.984152 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.984453 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-xjng4"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.984986 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.984993 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.985260 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/e58cf141-d5f2-4832-ab5c-067a3674cbb8-machine-approver-tls\") pod \"machine-approver-56656f9798-ntq2z\" (UID: \"e58cf141-d5f2-4832-ab5c-067a3674cbb8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntq2z"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.985282 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6-serving-cert\") pod \"apiserver-7bbb656c7d-krb8b\" (UID: \"6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.985511 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.985783 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/85474a25-567e-4ef0-be8a-75de8c7d18d9-etcd-client\") pod \"apiserver-76f77b778f-cm4vr\" (UID: \"85474a25-567e-4ef0-be8a-75de8c7d18d9\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4vr"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.985890 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6-etcd-client\") pod \"apiserver-7bbb656c7d-krb8b\" (UID: \"6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b"
Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.986308 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName:
\"kubernetes.io/secret/5a883fd6-0a70-4705-951f-df5e7b1bb863-serving-cert\") pod \"authentication-operator-69f744f599-m5wfx\" (UID: \"5a883fd6-0a70-4705-951f-df5e7b1bb863\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m5wfx" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.986444 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.986851 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.986894 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85474a25-567e-4ef0-be8a-75de8c7d18d9-serving-cert\") pod \"apiserver-76f77b778f-cm4vr\" (UID: \"85474a25-567e-4ef0-be8a-75de8c7d18d9\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4vr" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.987489 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8a863328-15b8-46bc-9ffd-faa97add46ea-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-lmrs2\" (UID: \"8a863328-15b8-46bc-9ffd-faa97add46ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lmrs2" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.988912 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh" Feb 19 12:48:48 crc kubenswrapper[4833]: I0219 12:48:48.989162 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.009183 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.028098 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.048060 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.068068 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.084895 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/78fc8d4f-ecae-4d57-b0a6-4a31751eb3c0-available-featuregates\") pod 
\"openshift-config-operator-7777fb866f-x2kph\" (UID: \"78fc8d4f-ecae-4d57-b0a6-4a31751eb3c0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2kph" Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.084935 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78fc8d4f-ecae-4d57-b0a6-4a31751eb3c0-serving-cert\") pod \"openshift-config-operator-7777fb866f-x2kph\" (UID: \"78fc8d4f-ecae-4d57-b0a6-4a31751eb3c0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2kph" Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.084973 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a50be962-42d3-4e0a-bf3d-13e00fd679a2-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5w7fq\" (UID: \"a50be962-42d3-4e0a-bf3d-13e00fd679a2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5w7fq" Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.085000 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnqts\" (UniqueName: \"kubernetes.io/projected/78fc8d4f-ecae-4d57-b0a6-4a31751eb3c0-kube-api-access-bnqts\") pod \"openshift-config-operator-7777fb866f-x2kph\" (UID: \"78fc8d4f-ecae-4d57-b0a6-4a31751eb3c0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2kph" Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.085021 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a50be962-42d3-4e0a-bf3d-13e00fd679a2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5w7fq\" (UID: \"a50be962-42d3-4e0a-bf3d-13e00fd679a2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5w7fq" Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.085060 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d665cab8-36f2-4952-b4ca-75f832485488-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lhlvm\" (UID: \"d665cab8-36f2-4952-b4ca-75f832485488\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lhlvm" Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.085093 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z9cb\" (UniqueName: \"kubernetes.io/projected/d665cab8-36f2-4952-b4ca-75f832485488-kube-api-access-9z9cb\") pod \"openshift-controller-manager-operator-756b6f6bc6-lhlvm\" (UID: \"d665cab8-36f2-4952-b4ca-75f832485488\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lhlvm" Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.085123 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a50be962-42d3-4e0a-bf3d-13e00fd679a2-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5w7fq\" (UID: \"a50be962-42d3-4e0a-bf3d-13e00fd679a2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5w7fq" Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.085169 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wxmb\" (UniqueName: 
\"kubernetes.io/projected/a50be962-42d3-4e0a-bf3d-13e00fd679a2-kube-api-access-4wxmb\") pod \"cluster-image-registry-operator-dc59b4c8b-5w7fq\" (UID: \"a50be962-42d3-4e0a-bf3d-13e00fd679a2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5w7fq" Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.085192 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d665cab8-36f2-4952-b4ca-75f832485488-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lhlvm\" (UID: \"d665cab8-36f2-4952-b4ca-75f832485488\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lhlvm" Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.085302 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/78fc8d4f-ecae-4d57-b0a6-4a31751eb3c0-available-featuregates\") pod \"openshift-config-operator-7777fb866f-x2kph\" (UID: \"78fc8d4f-ecae-4d57-b0a6-4a31751eb3c0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2kph" Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.086063 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d665cab8-36f2-4952-b4ca-75f832485488-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lhlvm\" (UID: \"d665cab8-36f2-4952-b4ca-75f832485488\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lhlvm" Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.087985 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a50be962-42d3-4e0a-bf3d-13e00fd679a2-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5w7fq\" (UID: \"a50be962-42d3-4e0a-bf3d-13e00fd679a2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5w7fq" Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.090020 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.092643 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78fc8d4f-ecae-4d57-b0a6-4a31751eb3c0-serving-cert\") pod \"openshift-config-operator-7777fb866f-x2kph\" (UID: \"78fc8d4f-ecae-4d57-b0a6-4a31751eb3c0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2kph" Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.096042 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/a50be962-42d3-4e0a-bf3d-13e00fd679a2-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5w7fq\" (UID: \"a50be962-42d3-4e0a-bf3d-13e00fd679a2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5w7fq" Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.098852 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d665cab8-36f2-4952-b4ca-75f832485488-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lhlvm\" (UID: \"d665cab8-36f2-4952-b4ca-75f832485488\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lhlvm" Feb 19 
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.119361 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.129561 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.189556 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.209707 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.230107 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.248604 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.268445 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.298458 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.309230 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.329479 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.349318 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.369746 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.389828 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.409484 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.429100 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.448940 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.469344 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.489360 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.508817 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.528830 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.549323 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.569126 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.599399 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.609658 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.630311 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.649840 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.669559 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.690009 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.709229 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.730015 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.749326 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.769191 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.789371 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.809551 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.829020 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.849852 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.870023 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.890055 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.907305 4833 request.go:700] Waited for 1.015189221s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serving-cert&limit=500&resourceVersion=0
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.909457 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.929621 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.950525 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.969849 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 19 12:48:49 crc kubenswrapper[4833]: I0219 12:48:49.988847 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.009664 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.031298 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.049341 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.069236 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.089929 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.108753 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.129577 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.150055 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.169114 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.189005 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.210702 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.229537 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.249217 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.270747 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.289702 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.310133 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.329534 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.348633 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.369246 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.389672 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.410263 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.428631 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.455875 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.470687 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.491443 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.508924 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.529804 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.548875 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.569095 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.589250 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.609209 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.629360 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.648997 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.670171 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.690546 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.709898 4833 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.729255 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.776454 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwgcg\" (UniqueName: \"kubernetes.io/projected/8a863328-15b8-46bc-9ffd-faa97add46ea-kube-api-access-zwgcg\") pod \"machine-api-operator-5694c8668f-lmrs2\" (UID: \"8a863328-15b8-46bc-9ffd-faa97add46ea\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lmrs2"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.802476 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn6zf\" (UniqueName: \"kubernetes.io/projected/6bcce72d-6a5d-42d2-b7ed-c721057061f6-kube-api-access-cn6zf\") pod \"oauth-openshift-558db77b4-66dsh\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.821977 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m54ps\" (UniqueName: \"kubernetes.io/projected/6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6-kube-api-access-m54ps\") pod \"apiserver-7bbb656c7d-krb8b\" (UID: \"6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.844406 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h4ch\" (UniqueName: \"kubernetes.io/projected/85474a25-567e-4ef0-be8a-75de8c7d18d9-kube-api-access-6h4ch\") pod \"apiserver-76f77b778f-cm4vr\" (UID: \"85474a25-567e-4ef0-be8a-75de8c7d18d9\") " pod="openshift-apiserver/apiserver-76f77b778f-cm4vr"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.851315 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.857090 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzmh4\" (UniqueName: \"kubernetes.io/projected/5a883fd6-0a70-4705-951f-df5e7b1bb863-kube-api-access-wzmh4\") pod \"authentication-operator-69f744f599-m5wfx\" (UID: \"5a883fd6-0a70-4705-951f-df5e7b1bb863\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-m5wfx"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.860470 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-lmrs2"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.870455 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.889689 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.907375 4833 request.go:700] Waited for 1.922279585s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/serviceaccounts/machine-approver-sa/token
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.923953 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-m5wfx"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.938276 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpb9v\" (UniqueName: \"kubernetes.io/projected/e58cf141-d5f2-4832-ab5c-067a3674cbb8-kube-api-access-rpb9v\") pod \"machine-approver-56656f9798-ntq2z\" (UID: \"e58cf141-d5f2-4832-ab5c-067a3674cbb8\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntq2z"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.940552 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.957033 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a50be962-42d3-4e0a-bf3d-13e00fd679a2-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5w7fq\" (UID: \"a50be962-42d3-4e0a-bf3d-13e00fd679a2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5w7fq"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.978526 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z9cb\" (UniqueName: \"kubernetes.io/projected/d665cab8-36f2-4952-b4ca-75f832485488-kube-api-access-9z9cb\") pod \"openshift-controller-manager-operator-756b6f6bc6-lhlvm\" (UID: \"d665cab8-36f2-4952-b4ca-75f832485488\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lhlvm"
Feb 19 12:48:50 crc kubenswrapper[4833]: I0219 12:48:50.988738 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b"
Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.004263 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wxmb\" (UniqueName: \"kubernetes.io/projected/a50be962-42d3-4e0a-bf3d-13e00fd679a2-kube-api-access-4wxmb\") pod \"cluster-image-registry-operator-dc59b4c8b-5w7fq\" (UID: \"a50be962-42d3-4e0a-bf3d-13e00fd679a2\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5w7fq"
Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.018097 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnqts\" (UniqueName: \"kubernetes.io/projected/78fc8d4f-ecae-4d57-b0a6-4a31751eb3c0-kube-api-access-bnqts\") pod \"openshift-config-operator-7777fb866f-x2kph\" (UID: \"78fc8d4f-ecae-4d57-b0a6-4a31751eb3c0\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2kph"
Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.050725 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5w7fq"
Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.051588 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lhlvm"
Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.051916 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2kph"
Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.111889 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dfc152d3-9326-4602-8b02-c9fbc8f73199-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n"
Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.111927 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz727\" (UniqueName: \"kubernetes.io/projected/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-kube-api-access-fz727\") pod \"console-f9d7485db-zjv88\" (UID: \"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63\") " pod="openshift-console/console-f9d7485db-zjv88"
Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.111950 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98f12eb2-abac-4859-870d-72555b13cda8-config\") pod \"kube-apiserver-operator-766d6c64bb-mcmtw\" (UID: \"98f12eb2-abac-4859-870d-72555b13cda8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mcmtw"
Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.112012 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17e0b921-253c-44e3-8abd-616a4c22825c-client-ca\") pod \"route-controller-manager-6576b87f9c-vzmp9\" (UID: \"17e0b921-253c-44e3-8abd-616a4c22825c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzmp9"
Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.112044 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmfqc\" (UniqueName: \"kubernetes.io/projected/b06f1db6-3e1a-4db7-ad72-588f9900223a-kube-api-access-xmfqc\") pod \"openshift-apiserver-operator-796bbdcf4f-nwx99\" (UID: \"b06f1db6-3e1a-4db7-ad72-588f9900223a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nwx99"
\"kube-api-access-xmfqc\" (UniqueName: \"kubernetes.io/projected/b06f1db6-3e1a-4db7-ad72-588f9900223a-kube-api-access-xmfqc\") pod \"openshift-apiserver-operator-796bbdcf4f-nwx99\" (UID: \"b06f1db6-3e1a-4db7-ad72-588f9900223a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nwx99" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.112076 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-console-serving-cert\") pod \"console-f9d7485db-zjv88\" (UID: \"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63\") " pod="openshift-console/console-f9d7485db-zjv88" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.112096 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-trusted-ca-bundle\") pod \"console-f9d7485db-zjv88\" (UID: \"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63\") " pod="openshift-console/console-f9d7485db-zjv88" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.112137 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/38ba40b9-14c2-419d-a54f-9dbc0f1ada2f-etcd-service-ca\") pod \"etcd-operator-b45778765-wqw4k\" (UID: \"38ba40b9-14c2-419d-a54f-9dbc0f1ada2f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wqw4k" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.112157 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59ef1c2f-bfb4-45af-9ae4-8d0455a5691d-config\") pod \"console-operator-58897d9998-z4dsv\" (UID: \"59ef1c2f-bfb4-45af-9ae4-8d0455a5691d\") " pod="openshift-console-operator/console-operator-58897d9998-z4dsv" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.112184 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b06f1db6-3e1a-4db7-ad72-588f9900223a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nwx99\" (UID: \"b06f1db6-3e1a-4db7-ad72-588f9900223a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nwx99" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.112228 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/749453c5-3458-41d2-b1ab-aca8e018cfd5-metrics-tls\") pod \"ingress-operator-5b745b69d9-k855k\" (UID: \"749453c5-3458-41d2-b1ab-aca8e018cfd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k855k" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.112256 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr9qn\" (UniqueName: \"kubernetes.io/projected/59ef1c2f-bfb4-45af-9ae4-8d0455a5691d-kube-api-access-qr9qn\") pod \"console-operator-58897d9998-z4dsv\" (UID: \"59ef1c2f-bfb4-45af-9ae4-8d0455a5691d\") " pod="openshift-console-operator/console-operator-58897d9998-z4dsv" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.112279 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/dfc152d3-9326-4602-8b02-c9fbc8f73199-registry-certificates\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.112301 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38ba40b9-14c2-419d-a54f-9dbc0f1ada2f-serving-cert\") pod \"etcd-operator-b45778765-wqw4k\" (UID: \"38ba40b9-14c2-419d-a54f-9dbc0f1ada2f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wqw4k" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.112322 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/749453c5-3458-41d2-b1ab-aca8e018cfd5-bound-sa-token\") pod \"ingress-operator-5b745b69d9-k855k\" (UID: \"749453c5-3458-41d2-b1ab-aca8e018cfd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k855k" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.112361 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-oauth-serving-cert\") pod \"console-f9d7485db-zjv88\" (UID: \"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63\") " pod="openshift-console/console-f9d7485db-zjv88" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.112429 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/38ba40b9-14c2-419d-a54f-9dbc0f1ada2f-etcd-ca\") pod \"etcd-operator-b45778765-wqw4k\" (UID: \"38ba40b9-14c2-419d-a54f-9dbc0f1ada2f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wqw4k" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.112453 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17e0b921-253c-44e3-8abd-616a4c22825c-config\") pod \"route-controller-manager-6576b87f9c-vzmp9\" (UID: \"17e0b921-253c-44e3-8abd-616a4c22825c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzmp9" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.112526 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzzjp\" (UniqueName: \"kubernetes.io/projected/38ba40b9-14c2-419d-a54f-9dbc0f1ada2f-kube-api-access-tzzjp\") pod \"etcd-operator-b45778765-wqw4k\" (UID: \"38ba40b9-14c2-419d-a54f-9dbc0f1ada2f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wqw4k" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.112548 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17e0b921-253c-44e3-8abd-616a4c22825c-serving-cert\") pod \"route-controller-manager-6576b87f9c-vzmp9\" (UID: \"17e0b921-253c-44e3-8abd-616a4c22825c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzmp9" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.112567 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98f12eb2-abac-4859-870d-72555b13cda8-kube-api-access\") pod 
\"kube-apiserver-operator-766d6c64bb-mcmtw\" (UID: \"98f12eb2-abac-4859-870d-72555b13cda8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mcmtw" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.112606 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4v7n\" (UniqueName: \"kubernetes.io/projected/dfc152d3-9326-4602-8b02-c9fbc8f73199-kube-api-access-c4v7n\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.112679 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-console-config\") pod \"console-f9d7485db-zjv88\" (UID: \"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63\") " pod="openshift-console/console-f9d7485db-zjv88" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.112711 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbmmc\" (UniqueName: \"kubernetes.io/projected/17e0b921-253c-44e3-8abd-616a4c22825c-kube-api-access-gbmmc\") pod \"route-controller-manager-6576b87f9c-vzmp9\" (UID: \"17e0b921-253c-44e3-8abd-616a4c22825c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzmp9" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.112767 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfswp\" (UniqueName: \"kubernetes.io/projected/749453c5-3458-41d2-b1ab-aca8e018cfd5-kube-api-access-sfswp\") pod \"ingress-operator-5b745b69d9-k855k\" (UID: \"749453c5-3458-41d2-b1ab-aca8e018cfd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k855k" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.112805 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38ba40b9-14c2-419d-a54f-9dbc0f1ada2f-config\") pod \"etcd-operator-b45778765-wqw4k\" (UID: \"38ba40b9-14c2-419d-a54f-9dbc0f1ada2f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wqw4k" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.112828 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgdvm\" (UniqueName: \"kubernetes.io/projected/27e9c527-e726-412a-ac42-d8b8974f136f-kube-api-access-dgdvm\") pod \"cluster-samples-operator-665b6dd947-qdwvj\" (UID: \"27e9c527-e726-412a-ac42-d8b8974f136f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdwvj" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.112851 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dfc152d3-9326-4602-8b02-c9fbc8f73199-trusted-ca\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.112875 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dfc152d3-9326-4602-8b02-c9fbc8f73199-registry-tls\") pod 
\"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.112915 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-console-oauth-config\") pod \"console-f9d7485db-zjv88\" (UID: \"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63\") " pod="openshift-console/console-f9d7485db-zjv88" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.112944 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/38ba40b9-14c2-419d-a54f-9dbc0f1ada2f-etcd-client\") pod \"etcd-operator-b45778765-wqw4k\" (UID: \"38ba40b9-14c2-419d-a54f-9dbc0f1ada2f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wqw4k" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.113009 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b06f1db6-3e1a-4db7-ad72-588f9900223a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nwx99\" (UID: \"b06f1db6-3e1a-4db7-ad72-588f9900223a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nwx99" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.113078 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dfc152d3-9326-4602-8b02-c9fbc8f73199-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.113106 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dfc152d3-9326-4602-8b02-c9fbc8f73199-bound-sa-token\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.113193 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.113239 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/27e9c527-e726-412a-ac42-d8b8974f136f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qdwvj\" (UID: \"27e9c527-e726-412a-ac42-d8b8974f136f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdwvj" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.113280 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/749453c5-3458-41d2-b1ab-aca8e018cfd5-trusted-ca\") pod \"ingress-operator-5b745b69d9-k855k\" (UID: 
\"749453c5-3458-41d2-b1ab-aca8e018cfd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k855k" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.113296 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59ef1c2f-bfb4-45af-9ae4-8d0455a5691d-serving-cert\") pod \"console-operator-58897d9998-z4dsv\" (UID: \"59ef1c2f-bfb4-45af-9ae4-8d0455a5691d\") " pod="openshift-console-operator/console-operator-58897d9998-z4dsv" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.113323 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/59ef1c2f-bfb4-45af-9ae4-8d0455a5691d-trusted-ca\") pod \"console-operator-58897d9998-z4dsv\" (UID: \"59ef1c2f-bfb4-45af-9ae4-8d0455a5691d\") " pod="openshift-console-operator/console-operator-58897d9998-z4dsv" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.113376 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5pwt\" (UniqueName: \"kubernetes.io/projected/7792e427-5573-4b37-858e-3c40b4f37505-kube-api-access-z5pwt\") pod \"downloads-7954f5f757-hwh5m\" (UID: \"7792e427-5573-4b37-858e-3c40b4f37505\") " pod="openshift-console/downloads-7954f5f757-hwh5m" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.113401 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-service-ca\") pod \"console-f9d7485db-zjv88\" (UID: \"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63\") " pod="openshift-console/console-f9d7485db-zjv88" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.113426 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98f12eb2-abac-4859-870d-72555b13cda8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mcmtw\" (UID: \"98f12eb2-abac-4859-870d-72555b13cda8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mcmtw" Feb 19 12:48:51 crc kubenswrapper[4833]: E0219 12:48:51.113454 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:51.613439865 +0000 UTC m=+142.008958633 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.141422 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-lmrs2"] Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.147846 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-cm4vr" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.189899 4833 util.go:30] "No sandbox for pod can be found. 
Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.189899 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntq2z"
Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.211617 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-m5wfx"]
Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.215857 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.215974 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17e0b921-253c-44e3-8abd-616a4c22825c-serving-cert\") pod \"route-controller-manager-6576b87f9c-vzmp9\" (UID: \"17e0b921-253c-44e3-8abd-616a4c22825c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzmp9"
Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.215999 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/119b17a5-9014-4cea-b2cc-32e410c88465-webhook-cert\") pod \"packageserver-d55dfcdfc-5db89\" (UID: \"119b17a5-9014-4cea-b2cc-32e410c88465\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5db89"
Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.216017 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45708559-b521-4b8d-a745-12119e61a8cb-serving-cert\") pod \"controller-manager-879f6c89f-9hhkc\" (UID: \"45708559-b521-4b8d-a745-12119e61a8cb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9hhkc"
Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.216034 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/88c5b880-561a-4961-91be-bc1ee9bdd96b-default-certificate\") pod \"router-default-5444994796-k9bp4\" (UID: \"88c5b880-561a-4961-91be-bc1ee9bdd96b\") " pod="openshift-ingress/router-default-5444994796-k9bp4"
Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.216051 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-console-config\") pod \"console-f9d7485db-zjv88\" (UID: \"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63\") " pod="openshift-console/console-f9d7485db-zjv88"
Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.216065 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbmmc\" (UniqueName: \"kubernetes.io/projected/17e0b921-253c-44e3-8abd-616a4c22825c-kube-api-access-gbmmc\") pod \"route-controller-manager-6576b87f9c-vzmp9\" (UID: \"17e0b921-253c-44e3-8abd-616a4c22825c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzmp9"
Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.216089 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-877dl\" (UniqueName: \"kubernetes.io/projected/45708559-b521-4b8d-a745-12119e61a8cb-kube-api-access-877dl\") pod \"controller-manager-879f6c89f-9hhkc\" (UID: \"45708559-b521-4b8d-a745-12119e61a8cb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9hhkc"
Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.216112 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/87c5cf64-6aee-4660-9847-5161a05a0410-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-shgxh\" (UID: \"87c5cf64-6aee-4660-9847-5161a05a0410\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-shgxh"
Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.216127 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/360a44b7-5f42-42d7-918c-226761dbbd2c-images\") pod \"machine-config-operator-74547568cd-fr925\" (UID: \"360a44b7-5f42-42d7-918c-226761dbbd2c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fr925"
Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.216178 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c314e60b-4099-42e8-9eff-e3ef54025cc3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bpx5m\" (UID: \"c314e60b-4099-42e8-9eff-e3ef54025cc3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bpx5m"
Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.216195 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvrh9\" (UniqueName: \"kubernetes.io/projected/c314e60b-4099-42e8-9eff-e3ef54025cc3-kube-api-access-zvrh9\") pod \"package-server-manager-789f6589d5-bpx5m\" (UID: \"c314e60b-4099-42e8-9eff-e3ef54025cc3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bpx5m"
Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.216214 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dfc152d3-9326-4602-8b02-c9fbc8f73199-registry-tls\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n"
Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.216228 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/38ba40b9-14c2-419d-a54f-9dbc0f1ada2f-etcd-client\") pod \"etcd-operator-b45778765-wqw4k\" (UID: \"38ba40b9-14c2-419d-a54f-9dbc0f1ada2f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wqw4k"
Feb 19 12:48:51 crc kubenswrapper[4833]: E0219 12:48:51.216283 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:51.716255678 +0000 UTC m=+142.111774516 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.216333 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b1bfd0e4-a923-4520-bd3c-df5aa3e2dfcc-profile-collector-cert\") pod \"catalog-operator-68c6474976-mzk7x\" (UID: \"b1bfd0e4-a923-4520-bd3c-df5aa3e2dfcc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzk7x"
Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.216369 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dfc152d3-9326-4602-8b02-c9fbc8f73199-bound-sa-token\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n"
Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.216394 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c9bk\" (UniqueName: \"kubernetes.io/projected/408486ff-077c-4f21-9c9c-e853669e312f-kube-api-access-5c9bk\") pod \"migrator-59844c95c7-flg7d\" (UID: \"408486ff-077c-4f21-9c9c-e853669e312f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-flg7d"
Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.216421 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/27e9c527-e726-412a-ac42-d8b8974f136f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qdwvj\" (UID: \"27e9c527-e726-412a-ac42-d8b8974f136f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdwvj"
Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.216446 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/749453c5-3458-41d2-b1ab-aca8e018cfd5-trusted-ca\") pod \"ingress-operator-5b745b69d9-k855k\" (UID: \"749453c5-3458-41d2-b1ab-aca8e018cfd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k855k"
Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.216469 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ab5bee17-16ff-4e1b-9868-69443e2b10d4-metrics-tls\") pod \"dns-default-86hv6\" (UID: \"ab5bee17-16ff-4e1b-9868-69443e2b10d4\") " pod="openshift-dns/dns-default-86hv6"
Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.216490 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrx8p\" (UniqueName: \"kubernetes.io/projected/8b08cc06-0056-43c0-a73f-9070d99cc0b5-kube-api-access-xrx8p\") pod \"ingress-canary-wdshf\" (UID: \"8b08cc06-0056-43c0-a73f-9070d99cc0b5\") " pod="openshift-ingress-canary/ingress-canary-wdshf"
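Both the mount and the unmount of the same PVC fail for the same reason, and each failure arms a retry window ("No retries permitted until ... (durationBeforeRetry 500ms)") so the volume manager does not hot-loop on a broken operation. A minimal sketch of that per-operation backoff bookkeeping; the 500ms base matches the journal, while the doubling and the cap are assumptions for illustration:

```go
// Toy model of the retry gating behind nestedpendingoperations: a failed
// volume operation may not be retried until its backoff window expires,
// and repeated failures grow the window.
package main

import (
	"fmt"
	"time"
)

const (
	initialBackoff = 500 * time.Millisecond // first retry delay seen in the journal
	maxBackoff     = 2 * time.Minute        // assumed cap
)

// expBackoff tracks one pending volume operation.
type expBackoff struct {
	duration time.Duration
}

// fail records a failure and returns the earliest time the operation may
// run again; each subsequent failure doubles the wait, up to the cap.
func (b *expBackoff) fail(now time.Time) time.Time {
	switch {
	case b.duration == 0:
		b.duration = initialBackoff
	case b.duration < maxBackoff:
		b.duration *= 2
	}
	return now.Add(b.duration)
}

func main() {
	var b expBackoff
	for i := 1; i <= 4; i++ {
		next := b.fail(time.Now())
		fmt.Printf("failure %d: no retries permitted until %s (durationBeforeRetry %s)\n",
			i, next.Format("15:04:05.000000000"), b.duration)
	}
}
```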
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45708559-b521-4b8d-a745-12119e61a8cb-client-ca\") pod \"controller-manager-879f6c89f-9hhkc\" (UID: \"45708559-b521-4b8d-a745-12119e61a8cb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9hhkc" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.216556 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brljv\" (UniqueName: \"kubernetes.io/projected/87c5cf64-6aee-4660-9847-5161a05a0410-kube-api-access-brljv\") pod \"multus-admission-controller-857f4d67dd-shgxh\" (UID: \"87c5cf64-6aee-4660-9847-5161a05a0410\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-shgxh" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.216581 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6cee23cd-f0ff-4954-9497-69b2097a34f1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bhmhd\" (UID: \"6cee23cd-f0ff-4954-9497-69b2097a34f1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhmhd" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.216607 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lhsb\" (UniqueName: \"kubernetes.io/projected/ee0e0b2d-bf8a-4d70-b85f-b21bd59baaeb-kube-api-access-2lhsb\") pod \"kube-storage-version-migrator-operator-b67b599dd-rzdxr\" (UID: \"ee0e0b2d-bf8a-4d70-b85f-b21bd59baaeb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rzdxr" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.216635 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5pwt\" (UniqueName: \"kubernetes.io/projected/7792e427-5573-4b37-858e-3c40b4f37505-kube-api-access-z5pwt\") pod \"downloads-7954f5f757-hwh5m\" (UID: \"7792e427-5573-4b37-858e-3c40b4f37505\") " pod="openshift-console/downloads-7954f5f757-hwh5m" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.216675 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee0e0b2d-bf8a-4d70-b85f-b21bd59baaeb-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rzdxr\" (UID: \"ee0e0b2d-bf8a-4d70-b85f-b21bd59baaeb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rzdxr" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.216698 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d7f1229-1f55-416b-beeb-60a3ae0abc62-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zw6vx\" (UID: \"7d7f1229-1f55-416b-beeb-60a3ae0abc62\") " pod="openshift-marketplace/marketplace-operator-79b997595-zw6vx" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.216720 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7d7f1229-1f55-416b-beeb-60a3ae0abc62-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zw6vx\" (UID: \"7d7f1229-1f55-416b-beeb-60a3ae0abc62\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-zw6vx" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.216758 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/846a8d85-beb7-4c48-9705-59ed68378f4c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4xshp\" (UID: \"846a8d85-beb7-4c48-9705-59ed68378f4c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4xshp" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.216782 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45708559-b521-4b8d-a745-12119e61a8cb-config\") pod \"controller-manager-879f6c89f-9hhkc\" (UID: \"45708559-b521-4b8d-a745-12119e61a8cb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9hhkc" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.216809 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkhgz\" (UniqueName: \"kubernetes.io/projected/20fbe5b6-4f07-41fd-a5f8-05c9d2c71089-kube-api-access-nkhgz\") pod \"machine-config-controller-84d6567774-nlrpv\" (UID: \"20fbe5b6-4f07-41fd-a5f8-05c9d2c71089\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nlrpv" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.216841 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dfc152d3-9326-4602-8b02-c9fbc8f73199-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.216864 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz727\" (UniqueName: \"kubernetes.io/projected/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-kube-api-access-fz727\") pod \"console-f9d7485db-zjv88\" (UID: \"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63\") " pod="openshift-console/console-f9d7485db-zjv88" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.216902 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98f12eb2-abac-4859-870d-72555b13cda8-config\") pod \"kube-apiserver-operator-766d6c64bb-mcmtw\" (UID: \"98f12eb2-abac-4859-870d-72555b13cda8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mcmtw" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.216924 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc4lz\" (UniqueName: \"kubernetes.io/projected/61a2a996-37aa-420f-b45e-9776c269d9dd-kube-api-access-kc4lz\") pod \"service-ca-operator-777779d784-9r2j9\" (UID: \"61a2a996-37aa-420f-b45e-9776c269d9dd\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9r2j9" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.216946 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mqtt\" (UniqueName: \"kubernetes.io/projected/14cacc6a-664d-4560-875d-55e4c731671a-kube-api-access-6mqtt\") pod \"dns-operator-744455d44c-qcdcz\" (UID: \"14cacc6a-664d-4560-875d-55e4c731671a\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-qcdcz" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.216973 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/119b17a5-9014-4cea-b2cc-32e410c88465-apiservice-cert\") pod \"packageserver-d55dfcdfc-5db89\" (UID: \"119b17a5-9014-4cea-b2cc-32e410c88465\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5db89" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.217021 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f436432a-f92b-4b2a-89a9-8014f487dc12-config-volume\") pod \"collect-profiles-29525085-npsvp\" (UID: \"f436432a-f92b-4b2a-89a9-8014f487dc12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525085-npsvp" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.217043 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5-registration-dir\") pod \"csi-hostpathplugin-7vxjx\" (UID: \"dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5\") " pod="hostpath-provisioner/csi-hostpathplugin-7vxjx" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.217085 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmfqc\" (UniqueName: \"kubernetes.io/projected/b06f1db6-3e1a-4db7-ad72-588f9900223a-kube-api-access-xmfqc\") pod \"openshift-apiserver-operator-796bbdcf4f-nwx99\" (UID: \"b06f1db6-3e1a-4db7-ad72-588f9900223a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nwx99" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.217108 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/360a44b7-5f42-42d7-918c-226761dbbd2c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-fr925\" (UID: \"360a44b7-5f42-42d7-918c-226761dbbd2c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fr925" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.217130 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-console-serving-cert\") pod \"console-f9d7485db-zjv88\" (UID: \"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63\") " pod="openshift-console/console-f9d7485db-zjv88" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.217152 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz44h\" (UniqueName: \"kubernetes.io/projected/f436432a-f92b-4b2a-89a9-8014f487dc12-kube-api-access-mz44h\") pod \"collect-profiles-29525085-npsvp\" (UID: \"f436432a-f92b-4b2a-89a9-8014f487dc12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525085-npsvp" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.217171 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdprr\" (UniqueName: \"kubernetes.io/projected/6cee23cd-f0ff-4954-9497-69b2097a34f1-kube-api-access-qdprr\") pod \"olm-operator-6b444d44fb-bhmhd\" (UID: \"6cee23cd-f0ff-4954-9497-69b2097a34f1\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhmhd" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.217194 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/749453c5-3458-41d2-b1ab-aca8e018cfd5-metrics-tls\") pod \"ingress-operator-5b745b69d9-k855k\" (UID: \"749453c5-3458-41d2-b1ab-aca8e018cfd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k855k" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.217217 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dfc152d3-9326-4602-8b02-c9fbc8f73199-registry-certificates\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.217238 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/749453c5-3458-41d2-b1ab-aca8e018cfd5-bound-sa-token\") pod \"ingress-operator-5b745b69d9-k855k\" (UID: \"749453c5-3458-41d2-b1ab-aca8e018cfd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k855k" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.217259 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wdmq\" (UniqueName: \"kubernetes.io/projected/e0f8a770-7d1a-430b-8a25-aa325b17c767-kube-api-access-2wdmq\") pod \"service-ca-9c57cc56f-4bk7q\" (UID: \"e0f8a770-7d1a-430b-8a25-aa325b17c767\") " pod="openshift-service-ca/service-ca-9c57cc56f-4bk7q" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.217318 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svwbn\" (UniqueName: \"kubernetes.io/projected/ab5bee17-16ff-4e1b-9868-69443e2b10d4-kube-api-access-svwbn\") pod \"dns-default-86hv6\" (UID: \"ab5bee17-16ff-4e1b-9868-69443e2b10d4\") " pod="openshift-dns/dns-default-86hv6" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.217339 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88c5b880-561a-4961-91be-bc1ee9bdd96b-metrics-certs\") pod \"router-default-5444994796-k9bp4\" (UID: \"88c5b880-561a-4961-91be-bc1ee9bdd96b\") " pod="openshift-ingress/router-default-5444994796-k9bp4" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.217361 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-oauth-serving-cert\") pod \"console-f9d7485db-zjv88\" (UID: \"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63\") " pod="openshift-console/console-f9d7485db-zjv88" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.217384 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d118e366-ab6c-41e1-9aae-c993e9125fd4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8cbn5\" (UID: \"d118e366-ab6c-41e1-9aae-c993e9125fd4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8cbn5" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.217406 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d118e366-ab6c-41e1-9aae-c993e9125fd4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8cbn5\" (UID: \"d118e366-ab6c-41e1-9aae-c993e9125fd4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8cbn5" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.217429 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bbqb\" (UniqueName: \"kubernetes.io/projected/a3b401b0-2191-450a-9d87-e8066678f93b-kube-api-access-9bbqb\") pod \"machine-config-server-xjng4\" (UID: \"a3b401b0-2191-450a-9d87-e8066678f93b\") " pod="openshift-machine-config-operator/machine-config-server-xjng4" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.217449 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a3b401b0-2191-450a-9d87-e8066678f93b-node-bootstrap-token\") pod \"machine-config-server-xjng4\" (UID: \"a3b401b0-2191-450a-9d87-e8066678f93b\") " pod="openshift-machine-config-operator/machine-config-server-xjng4" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.217468 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a3b401b0-2191-450a-9d87-e8066678f93b-certs\") pod \"machine-config-server-xjng4\" (UID: \"a3b401b0-2191-450a-9d87-e8066678f93b\") " pod="openshift-machine-config-operator/machine-config-server-xjng4" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.217511 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f436432a-f92b-4b2a-89a9-8014f487dc12-secret-volume\") pod \"collect-profiles-29525085-npsvp\" (UID: \"f436432a-f92b-4b2a-89a9-8014f487dc12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525085-npsvp" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.217535 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5-plugins-dir\") pod \"csi-hostpathplugin-7vxjx\" (UID: \"dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5\") " pod="hostpath-provisioner/csi-hostpathplugin-7vxjx" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.217560 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/38ba40b9-14c2-419d-a54f-9dbc0f1ada2f-etcd-ca\") pod \"etcd-operator-b45778765-wqw4k\" (UID: \"38ba40b9-14c2-419d-a54f-9dbc0f1ada2f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wqw4k" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.217580 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17e0b921-253c-44e3-8abd-616a4c22825c-config\") pod \"route-controller-manager-6576b87f9c-vzmp9\" (UID: \"17e0b921-253c-44e3-8abd-616a4c22825c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzmp9" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.217609 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/ab5bee17-16ff-4e1b-9868-69443e2b10d4-config-volume\") pod \"dns-default-86hv6\" (UID: \"ab5bee17-16ff-4e1b-9868-69443e2b10d4\") " pod="openshift-dns/dns-default-86hv6" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.217621 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-console-config\") pod \"console-f9d7485db-zjv88\" (UID: \"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63\") " pod="openshift-console/console-f9d7485db-zjv88" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.217630 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b08cc06-0056-43c0-a73f-9070d99cc0b5-cert\") pod \"ingress-canary-wdshf\" (UID: \"8b08cc06-0056-43c0-a73f-9070d99cc0b5\") " pod="openshift-ingress-canary/ingress-canary-wdshf" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.218139 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzzjp\" (UniqueName: \"kubernetes.io/projected/38ba40b9-14c2-419d-a54f-9dbc0f1ada2f-kube-api-access-tzzjp\") pod \"etcd-operator-b45778765-wqw4k\" (UID: \"38ba40b9-14c2-419d-a54f-9dbc0f1ada2f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wqw4k" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.218178 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98f12eb2-abac-4859-870d-72555b13cda8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mcmtw\" (UID: \"98f12eb2-abac-4859-870d-72555b13cda8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mcmtw" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.218208 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4v7n\" (UniqueName: \"kubernetes.io/projected/dfc152d3-9326-4602-8b02-c9fbc8f73199-kube-api-access-c4v7n\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.218240 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6cee23cd-f0ff-4954-9497-69b2097a34f1-srv-cert\") pod \"olm-operator-6b444d44fb-bhmhd\" (UID: \"6cee23cd-f0ff-4954-9497-69b2097a34f1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhmhd" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.218269 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88c5b880-561a-4961-91be-bc1ee9bdd96b-service-ca-bundle\") pod \"router-default-5444994796-k9bp4\" (UID: \"88c5b880-561a-4961-91be-bc1ee9bdd96b\") " pod="openshift-ingress/router-default-5444994796-k9bp4" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.218296 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlg6w\" (UniqueName: \"kubernetes.io/projected/360a44b7-5f42-42d7-918c-226761dbbd2c-kube-api-access-vlg6w\") pod \"machine-config-operator-74547568cd-fr925\" (UID: \"360a44b7-5f42-42d7-918c-226761dbbd2c\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fr925" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.218323 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfswp\" (UniqueName: \"kubernetes.io/projected/749453c5-3458-41d2-b1ab-aca8e018cfd5-kube-api-access-sfswp\") pod \"ingress-operator-5b745b69d9-k855k\" (UID: \"749453c5-3458-41d2-b1ab-aca8e018cfd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k855k" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.218348 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/360a44b7-5f42-42d7-918c-226761dbbd2c-proxy-tls\") pod \"machine-config-operator-74547568cd-fr925\" (UID: \"360a44b7-5f42-42d7-918c-226761dbbd2c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fr925" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.218356 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dfc152d3-9326-4602-8b02-c9fbc8f73199-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.218370 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chbb9\" (UniqueName: \"kubernetes.io/projected/dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5-kube-api-access-chbb9\") pod \"csi-hostpathplugin-7vxjx\" (UID: \"dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5\") " pod="hostpath-provisioner/csi-hostpathplugin-7vxjx" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.218394 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fj58\" (UniqueName: \"kubernetes.io/projected/88c5b880-561a-4961-91be-bc1ee9bdd96b-kube-api-access-4fj58\") pod \"router-default-5444994796-k9bp4\" (UID: \"88c5b880-561a-4961-91be-bc1ee9bdd96b\") " pod="openshift-ingress/router-default-5444994796-k9bp4" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.218419 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38ba40b9-14c2-419d-a54f-9dbc0f1ada2f-config\") pod \"etcd-operator-b45778765-wqw4k\" (UID: \"38ba40b9-14c2-419d-a54f-9dbc0f1ada2f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wqw4k" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.218447 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgdvm\" (UniqueName: \"kubernetes.io/projected/27e9c527-e726-412a-ac42-d8b8974f136f-kube-api-access-dgdvm\") pod \"cluster-samples-operator-665b6dd947-qdwvj\" (UID: \"27e9c527-e726-412a-ac42-d8b8974f136f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdwvj" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.218567 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dfc152d3-9326-4602-8b02-c9fbc8f73199-trusted-ca\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.218599 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-console-oauth-config\") pod \"console-f9d7485db-zjv88\" (UID: \"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63\") " pod="openshift-console/console-f9d7485db-zjv88" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.218625 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b06f1db6-3e1a-4db7-ad72-588f9900223a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nwx99\" (UID: \"b06f1db6-3e1a-4db7-ad72-588f9900223a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nwx99" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.218648 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b1bfd0e4-a923-4520-bd3c-df5aa3e2dfcc-srv-cert\") pod \"catalog-operator-68c6474976-mzk7x\" (UID: \"b1bfd0e4-a923-4520-bd3c-df5aa3e2dfcc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzk7x" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.219679 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/749453c5-3458-41d2-b1ab-aca8e018cfd5-trusted-ca\") pod \"ingress-operator-5b745b69d9-k855k\" (UID: \"749453c5-3458-41d2-b1ab-aca8e018cfd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k855k" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.220476 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-oauth-serving-cert\") pod \"console-f9d7485db-zjv88\" (UID: \"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63\") " pod="openshift-console/console-f9d7485db-zjv88" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.220566 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38ba40b9-14c2-419d-a54f-9dbc0f1ada2f-config\") pod \"etcd-operator-b45778765-wqw4k\" (UID: \"38ba40b9-14c2-419d-a54f-9dbc0f1ada2f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wqw4k" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.220632 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dfc152d3-9326-4602-8b02-c9fbc8f73199-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.220663 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61a2a996-37aa-420f-b45e-9776c269d9dd-config\") pod \"service-ca-operator-777779d784-9r2j9\" (UID: \"61a2a996-37aa-420f-b45e-9776c269d9dd\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9r2j9" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.220718 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/20fbe5b6-4f07-41fd-a5f8-05c9d2c71089-mcc-auth-proxy-config\") pod 
\"machine-config-controller-84d6567774-nlrpv\" (UID: \"20fbe5b6-4f07-41fd-a5f8-05c9d2c71089\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nlrpv" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.220781 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/38ba40b9-14c2-419d-a54f-9dbc0f1ada2f-etcd-ca\") pod \"etcd-operator-b45778765-wqw4k\" (UID: \"38ba40b9-14c2-419d-a54f-9dbc0f1ada2f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wqw4k" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.220961 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.221058 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59ef1c2f-bfb4-45af-9ae4-8d0455a5691d-serving-cert\") pod \"console-operator-58897d9998-z4dsv\" (UID: \"59ef1c2f-bfb4-45af-9ae4-8d0455a5691d\") " pod="openshift-console-operator/console-operator-58897d9998-z4dsv" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.221289 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/59ef1c2f-bfb4-45af-9ae4-8d0455a5691d-trusted-ca\") pod \"console-operator-58897d9998-z4dsv\" (UID: \"59ef1c2f-bfb4-45af-9ae4-8d0455a5691d\") " pod="openshift-console-operator/console-operator-58897d9998-z4dsv" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.221345 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwvvr\" (UniqueName: \"kubernetes.io/projected/ea6cc7f7-b2fa-40d4-93cd-795a01861ecb-kube-api-access-dwvvr\") pod \"control-plane-machine-set-operator-78cbb6b69f-4gj46\" (UID: \"ea6cc7f7-b2fa-40d4-93cd-795a01861ecb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4gj46" Feb 19 12:48:51 crc kubenswrapper[4833]: E0219 12:48:51.221385 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:51.721360636 +0000 UTC m=+142.116879394 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.221805 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/749453c5-3458-41d2-b1ab-aca8e018cfd5-metrics-tls\") pod \"ingress-operator-5b745b69d9-k855k\" (UID: \"749453c5-3458-41d2-b1ab-aca8e018cfd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k855k" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.221951 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17e0b921-253c-44e3-8abd-616a4c22825c-config\") pod \"route-controller-manager-6576b87f9c-vzmp9\" (UID: \"17e0b921-253c-44e3-8abd-616a4c22825c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzmp9" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.222345 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/846a8d85-beb7-4c48-9705-59ed68378f4c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4xshp\" (UID: \"846a8d85-beb7-4c48-9705-59ed68378f4c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4xshp" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.222550 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5-mountpoint-dir\") pod \"csi-hostpathplugin-7vxjx\" (UID: \"dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5\") " pod="hostpath-provisioner/csi-hostpathplugin-7vxjx" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.222596 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/20fbe5b6-4f07-41fd-a5f8-05c9d2c71089-proxy-tls\") pod \"machine-config-controller-84d6567774-nlrpv\" (UID: \"20fbe5b6-4f07-41fd-a5f8-05c9d2c71089\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nlrpv" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.222652 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-service-ca\") pod \"console-f9d7485db-zjv88\" (UID: \"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63\") " pod="openshift-console/console-f9d7485db-zjv88" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.222698 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98f12eb2-abac-4859-870d-72555b13cda8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mcmtw\" (UID: \"98f12eb2-abac-4859-870d-72555b13cda8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mcmtw" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.223137 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for 
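This E0219 line is the same MountDevice failure as before, re-attempted after the previous 500ms window expired: the desired-state reconciler keeps comparing the pod's declared volumes against what is actually mounted and retries whatever is still missing. A toy version of that retry loop, with hypothetical types standing in for the desired/actual state caches:

```go
// Toy reconciler (hypothetical types): each sync pass re-attempts every
// desired volume that is not yet mounted and not currently backing off,
// which is why the same mount failure recurs in the journal until the
// CSI driver finally registers.
package main

import (
	"fmt"
	"time"
)

type volumeState struct {
	name       string
	mounted    bool
	retryAfter time.Time
}

// reconcile starts a mount for every desired volume that is not mounted
// and whose backoff window has expired.
func reconcile(desired []*volumeState, mount func(*volumeState) error) {
	now := time.Now()
	for _, v := range desired {
		if v.mounted || now.Before(v.retryAfter) {
			continue
		}
		if err := mount(v); err != nil {
			v.retryAfter = now.Add(500 * time.Millisecond) // defer, as in the log
			fmt.Printf("mount %q failed, retry after %s: %v\n",
				v.name, v.retryAfter.Format("15:04:05.000"), err)
			continue
		}
		v.mounted = true
		fmt.Printf("mount %q succeeded\n", v.name)
	}
}

func main() {
	vols := []*volumeState{{name: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8"}}
	mount := func(v *volumeState) error {
		return fmt.Errorf("driver for %s not registered yet", v.name)
	}
	for pass := 0; pass < 2; pass++ { // two sync passes, as in the journal
		reconcile(vols, mount)
		time.Sleep(600 * time.Millisecond)
	}
}
```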
volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/59ef1c2f-bfb4-45af-9ae4-8d0455a5691d-trusted-ca\") pod \"console-operator-58897d9998-z4dsv\" (UID: \"59ef1c2f-bfb4-45af-9ae4-8d0455a5691d\") " pod="openshift-console-operator/console-operator-58897d9998-z4dsv" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.223908 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-service-ca\") pod \"console-f9d7485db-zjv88\" (UID: \"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63\") " pod="openshift-console/console-f9d7485db-zjv88" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.224607 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-console-serving-cert\") pod \"console-f9d7485db-zjv88\" (UID: \"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63\") " pod="openshift-console/console-f9d7485db-zjv88" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.225002 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45708559-b521-4b8d-a745-12119e61a8cb-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9hhkc\" (UID: \"45708559-b521-4b8d-a745-12119e61a8cb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9hhkc" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.225297 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5-socket-dir\") pod \"csi-hostpathplugin-7vxjx\" (UID: \"dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5\") " pod="hostpath-provisioner/csi-hostpathplugin-7vxjx" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.225358 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7fts\" (UniqueName: \"kubernetes.io/projected/119b17a5-9014-4cea-b2cc-32e410c88465-kube-api-access-x7fts\") pod \"packageserver-d55dfcdfc-5db89\" (UID: \"119b17a5-9014-4cea-b2cc-32e410c88465\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5db89" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.225392 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcnrz\" (UniqueName: \"kubernetes.io/projected/7d7f1229-1f55-416b-beeb-60a3ae0abc62-kube-api-access-lcnrz\") pod \"marketplace-operator-79b997595-zw6vx\" (UID: \"7d7f1229-1f55-416b-beeb-60a3ae0abc62\") " pod="openshift-marketplace/marketplace-operator-79b997595-zw6vx" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.225444 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5-csi-data-dir\") pod \"csi-hostpathplugin-7vxjx\" (UID: \"dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5\") " pod="hostpath-provisioner/csi-hostpathplugin-7vxjx" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.225731 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17e0b921-253c-44e3-8abd-616a4c22825c-client-ca\") pod \"route-controller-manager-6576b87f9c-vzmp9\" (UID: \"17e0b921-253c-44e3-8abd-616a4c22825c\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzmp9" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.225804 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee0e0b2d-bf8a-4d70-b85f-b21bd59baaeb-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rzdxr\" (UID: \"ee0e0b2d-bf8a-4d70-b85f-b21bd59baaeb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rzdxr" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.225940 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea6cc7f7-b2fa-40d4-93cd-795a01861ecb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-4gj46\" (UID: \"ea6cc7f7-b2fa-40d4-93cd-795a01861ecb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4gj46" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.225964 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/88c5b880-561a-4961-91be-bc1ee9bdd96b-stats-auth\") pod \"router-default-5444994796-k9bp4\" (UID: \"88c5b880-561a-4961-91be-bc1ee9bdd96b\") " pod="openshift-ingress/router-default-5444994796-k9bp4" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.225982 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/119b17a5-9014-4cea-b2cc-32e410c88465-tmpfs\") pod \"packageserver-d55dfcdfc-5db89\" (UID: \"119b17a5-9014-4cea-b2cc-32e410c88465\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5db89" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.225986 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/27e9c527-e726-412a-ac42-d8b8974f136f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qdwvj\" (UID: \"27e9c527-e726-412a-ac42-d8b8974f136f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdwvj" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.226007 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-trusted-ca-bundle\") pod \"console-f9d7485db-zjv88\" (UID: \"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63\") " pod="openshift-console/console-f9d7485db-zjv88" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.226025 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e0f8a770-7d1a-430b-8a25-aa325b17c767-signing-key\") pod \"service-ca-9c57cc56f-4bk7q\" (UID: \"e0f8a770-7d1a-430b-8a25-aa325b17c767\") " pod="openshift-service-ca/service-ca-9c57cc56f-4bk7q" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.226044 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/846a8d85-beb7-4c48-9705-59ed68378f4c-config\") pod \"kube-controller-manager-operator-78b949d7b-4xshp\" (UID: \"846a8d85-beb7-4c48-9705-59ed68378f4c\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4xshp" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.226073 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/38ba40b9-14c2-419d-a54f-9dbc0f1ada2f-etcd-service-ca\") pod \"etcd-operator-b45778765-wqw4k\" (UID: \"38ba40b9-14c2-419d-a54f-9dbc0f1ada2f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wqw4k" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.226089 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59ef1c2f-bfb4-45af-9ae4-8d0455a5691d-config\") pod \"console-operator-58897d9998-z4dsv\" (UID: \"59ef1c2f-bfb4-45af-9ae4-8d0455a5691d\") " pod="openshift-console-operator/console-operator-58897d9998-z4dsv" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.226111 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b06f1db6-3e1a-4db7-ad72-588f9900223a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nwx99\" (UID: \"b06f1db6-3e1a-4db7-ad72-588f9900223a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nwx99" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.226122 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dfc152d3-9326-4602-8b02-c9fbc8f73199-trusted-ca\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.226125 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dfc152d3-9326-4602-8b02-c9fbc8f73199-registry-certificates\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.226131 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e0f8a770-7d1a-430b-8a25-aa325b17c767-signing-cabundle\") pod \"service-ca-9c57cc56f-4bk7q\" (UID: \"e0f8a770-7d1a-430b-8a25-aa325b17c767\") " pod="openshift-service-ca/service-ca-9c57cc56f-4bk7q" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.226234 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr9qn\" (UniqueName: \"kubernetes.io/projected/59ef1c2f-bfb4-45af-9ae4-8d0455a5691d-kube-api-access-qr9qn\") pod \"console-operator-58897d9998-z4dsv\" (UID: \"59ef1c2f-bfb4-45af-9ae4-8d0455a5691d\") " pod="openshift-console-operator/console-operator-58897d9998-z4dsv" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.226289 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/14cacc6a-664d-4560-875d-55e4c731671a-metrics-tls\") pod \"dns-operator-744455d44c-qcdcz\" (UID: \"14cacc6a-664d-4560-875d-55e4c731671a\") " pod="openshift-dns-operator/dns-operator-744455d44c-qcdcz" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.226433 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/38ba40b9-14c2-419d-a54f-9dbc0f1ada2f-serving-cert\") pod \"etcd-operator-b45778765-wqw4k\" (UID: \"38ba40b9-14c2-419d-a54f-9dbc0f1ada2f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wqw4k" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.226528 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d118e366-ab6c-41e1-9aae-c993e9125fd4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8cbn5\" (UID: \"d118e366-ab6c-41e1-9aae-c993e9125fd4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8cbn5" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.226616 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m2t7\" (UniqueName: \"kubernetes.io/projected/b1bfd0e4-a923-4520-bd3c-df5aa3e2dfcc-kube-api-access-4m2t7\") pod \"catalog-operator-68c6474976-mzk7x\" (UID: \"b1bfd0e4-a923-4520-bd3c-df5aa3e2dfcc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzk7x" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.226660 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61a2a996-37aa-420f-b45e-9776c269d9dd-serving-cert\") pod \"service-ca-operator-777779d784-9r2j9\" (UID: \"61a2a996-37aa-420f-b45e-9776c269d9dd\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9r2j9" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.227042 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/38ba40b9-14c2-419d-a54f-9dbc0f1ada2f-etcd-service-ca\") pod \"etcd-operator-b45778765-wqw4k\" (UID: \"38ba40b9-14c2-419d-a54f-9dbc0f1ada2f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wqw4k" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.227287 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-trusted-ca-bundle\") pod \"console-f9d7485db-zjv88\" (UID: \"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63\") " pod="openshift-console/console-f9d7485db-zjv88" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.228682 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/38ba40b9-14c2-419d-a54f-9dbc0f1ada2f-etcd-client\") pod \"etcd-operator-b45778765-wqw4k\" (UID: \"38ba40b9-14c2-419d-a54f-9dbc0f1ada2f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wqw4k" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.229203 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17e0b921-253c-44e3-8abd-616a4c22825c-client-ca\") pod \"route-controller-manager-6576b87f9c-vzmp9\" (UID: \"17e0b921-253c-44e3-8abd-616a4c22825c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzmp9" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.230031 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59ef1c2f-bfb4-45af-9ae4-8d0455a5691d-serving-cert\") pod \"console-operator-58897d9998-z4dsv\" (UID: \"59ef1c2f-bfb4-45af-9ae4-8d0455a5691d\") 
" pod="openshift-console-operator/console-operator-58897d9998-z4dsv" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.230326 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59ef1c2f-bfb4-45af-9ae4-8d0455a5691d-config\") pod \"console-operator-58897d9998-z4dsv\" (UID: \"59ef1c2f-bfb4-45af-9ae4-8d0455a5691d\") " pod="openshift-console-operator/console-operator-58897d9998-z4dsv" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.230677 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-console-oauth-config\") pod \"console-f9d7485db-zjv88\" (UID: \"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63\") " pod="openshift-console/console-f9d7485db-zjv88" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.230852 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dfc152d3-9326-4602-8b02-c9fbc8f73199-registry-tls\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.231308 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntq2z" event={"ID":"e58cf141-d5f2-4832-ab5c-067a3674cbb8","Type":"ContainerStarted","Data":"02feeb5adfc10e970d777cec936fe7ef06dbae9922f216affd36fb2a577cef46"} Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.221372 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98f12eb2-abac-4859-870d-72555b13cda8-config\") pod \"kube-apiserver-operator-766d6c64bb-mcmtw\" (UID: \"98f12eb2-abac-4859-870d-72555b13cda8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mcmtw" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.236617 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b06f1db6-3e1a-4db7-ad72-588f9900223a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nwx99\" (UID: \"b06f1db6-3e1a-4db7-ad72-588f9900223a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nwx99" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.236849 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dfc152d3-9326-4602-8b02-c9fbc8f73199-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.236992 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-lmrs2" event={"ID":"8a863328-15b8-46bc-9ffd-faa97add46ea","Type":"ContainerStarted","Data":"067233be254fc4b70c35a6715732cf49bacb4c25f5e92b3c99395f55a8c06bbd"} Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.237115 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38ba40b9-14c2-419d-a54f-9dbc0f1ada2f-serving-cert\") pod \"etcd-operator-b45778765-wqw4k\" (UID: \"38ba40b9-14c2-419d-a54f-9dbc0f1ada2f\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-wqw4k" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.236644 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17e0b921-253c-44e3-8abd-616a4c22825c-serving-cert\") pod \"route-controller-manager-6576b87f9c-vzmp9\" (UID: \"17e0b921-253c-44e3-8abd-616a4c22825c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzmp9" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.239083 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b06f1db6-3e1a-4db7-ad72-588f9900223a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nwx99\" (UID: \"b06f1db6-3e1a-4db7-ad72-588f9900223a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nwx99" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.245045 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5pwt\" (UniqueName: \"kubernetes.io/projected/7792e427-5573-4b37-858e-3c40b4f37505-kube-api-access-z5pwt\") pod \"downloads-7954f5f757-hwh5m\" (UID: \"7792e427-5573-4b37-858e-3c40b4f37505\") " pod="openshift-console/downloads-7954f5f757-hwh5m" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.263677 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-66dsh"] Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.264593 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/98f12eb2-abac-4859-870d-72555b13cda8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mcmtw\" (UID: \"98f12eb2-abac-4859-870d-72555b13cda8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mcmtw" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.267027 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/749453c5-3458-41d2-b1ab-aca8e018cfd5-bound-sa-token\") pod \"ingress-operator-5b745b69d9-k855k\" (UID: \"749453c5-3458-41d2-b1ab-aca8e018cfd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k855k" Feb 19 12:48:51 crc kubenswrapper[4833]: W0219 12:48:51.277149 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bcce72d_6a5d_42d2_b7ed_c721057061f6.slice/crio-f95f42bcb62df035a2217d86beb9f3e33e0952a1fdfd8a7adefebdb628a7593e WatchSource:0}: Error finding container f95f42bcb62df035a2217d86beb9f3e33e0952a1fdfd8a7adefebdb628a7593e: Status 404 returned error can't find the container with id f95f42bcb62df035a2217d86beb9f3e33e0952a1fdfd8a7adefebdb628a7593e Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.285905 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbmmc\" (UniqueName: \"kubernetes.io/projected/17e0b921-253c-44e3-8abd-616a4c22825c-kube-api-access-gbmmc\") pod \"route-controller-manager-6576b87f9c-vzmp9\" (UID: \"17e0b921-253c-44e3-8abd-616a4c22825c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzmp9" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.296715 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b"] Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.297098 4833 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzmp9" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.309970 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz727\" (UniqueName: \"kubernetes.io/projected/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-kube-api-access-fz727\") pod \"console-f9d7485db-zjv88\" (UID: \"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63\") " pod="openshift-console/console-f9d7485db-zjv88" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.315988 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-hwh5m" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.325104 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmfqc\" (UniqueName: \"kubernetes.io/projected/b06f1db6-3e1a-4db7-ad72-588f9900223a-kube-api-access-xmfqc\") pod \"openshift-apiserver-operator-796bbdcf4f-nwx99\" (UID: \"b06f1db6-3e1a-4db7-ad72-588f9900223a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nwx99" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.327292 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.327567 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c9bk\" (UniqueName: \"kubernetes.io/projected/408486ff-077c-4f21-9c9c-e853669e312f-kube-api-access-5c9bk\") pod \"migrator-59844c95c7-flg7d\" (UID: \"408486ff-077c-4f21-9c9c-e853669e312f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-flg7d" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.327598 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ab5bee17-16ff-4e1b-9868-69443e2b10d4-metrics-tls\") pod \"dns-default-86hv6\" (UID: \"ab5bee17-16ff-4e1b-9868-69443e2b10d4\") " pod="openshift-dns/dns-default-86hv6" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.327626 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrx8p\" (UniqueName: \"kubernetes.io/projected/8b08cc06-0056-43c0-a73f-9070d99cc0b5-kube-api-access-xrx8p\") pod \"ingress-canary-wdshf\" (UID: \"8b08cc06-0056-43c0-a73f-9070d99cc0b5\") " pod="openshift-ingress-canary/ingress-canary-wdshf" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.327648 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brljv\" (UniqueName: \"kubernetes.io/projected/87c5cf64-6aee-4660-9847-5161a05a0410-kube-api-access-brljv\") pod \"multus-admission-controller-857f4d67dd-shgxh\" (UID: \"87c5cf64-6aee-4660-9847-5161a05a0410\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-shgxh" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.327670 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45708559-b521-4b8d-a745-12119e61a8cb-client-ca\") pod \"controller-manager-879f6c89f-9hhkc\" (UID: \"45708559-b521-4b8d-a745-12119e61a8cb\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-9hhkc" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.327692 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6cee23cd-f0ff-4954-9497-69b2097a34f1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bhmhd\" (UID: \"6cee23cd-f0ff-4954-9497-69b2097a34f1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhmhd" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.327712 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lhsb\" (UniqueName: \"kubernetes.io/projected/ee0e0b2d-bf8a-4d70-b85f-b21bd59baaeb-kube-api-access-2lhsb\") pod \"kube-storage-version-migrator-operator-b67b599dd-rzdxr\" (UID: \"ee0e0b2d-bf8a-4d70-b85f-b21bd59baaeb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rzdxr" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.327734 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7d7f1229-1f55-416b-beeb-60a3ae0abc62-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zw6vx\" (UID: \"7d7f1229-1f55-416b-beeb-60a3ae0abc62\") " pod="openshift-marketplace/marketplace-operator-79b997595-zw6vx" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.327755 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee0e0b2d-bf8a-4d70-b85f-b21bd59baaeb-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rzdxr\" (UID: \"ee0e0b2d-bf8a-4d70-b85f-b21bd59baaeb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rzdxr" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.327807 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d7f1229-1f55-416b-beeb-60a3ae0abc62-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zw6vx\" (UID: \"7d7f1229-1f55-416b-beeb-60a3ae0abc62\") " pod="openshift-marketplace/marketplace-operator-79b997595-zw6vx" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.327826 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/846a8d85-beb7-4c48-9705-59ed68378f4c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4xshp\" (UID: \"846a8d85-beb7-4c48-9705-59ed68378f4c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4xshp" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.327845 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45708559-b521-4b8d-a745-12119e61a8cb-config\") pod \"controller-manager-879f6c89f-9hhkc\" (UID: \"45708559-b521-4b8d-a745-12119e61a8cb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9hhkc" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.327866 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkhgz\" (UniqueName: \"kubernetes.io/projected/20fbe5b6-4f07-41fd-a5f8-05c9d2c71089-kube-api-access-nkhgz\") pod \"machine-config-controller-84d6567774-nlrpv\" (UID: 
\"20fbe5b6-4f07-41fd-a5f8-05c9d2c71089\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nlrpv" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.327891 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc4lz\" (UniqueName: \"kubernetes.io/projected/61a2a996-37aa-420f-b45e-9776c269d9dd-kube-api-access-kc4lz\") pod \"service-ca-operator-777779d784-9r2j9\" (UID: \"61a2a996-37aa-420f-b45e-9776c269d9dd\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9r2j9" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.327916 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/119b17a5-9014-4cea-b2cc-32e410c88465-apiservice-cert\") pod \"packageserver-d55dfcdfc-5db89\" (UID: \"119b17a5-9014-4cea-b2cc-32e410c88465\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5db89" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.327939 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mqtt\" (UniqueName: \"kubernetes.io/projected/14cacc6a-664d-4560-875d-55e4c731671a-kube-api-access-6mqtt\") pod \"dns-operator-744455d44c-qcdcz\" (UID: \"14cacc6a-664d-4560-875d-55e4c731671a\") " pod="openshift-dns-operator/dns-operator-744455d44c-qcdcz" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.327962 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5-registration-dir\") pod \"csi-hostpathplugin-7vxjx\" (UID: \"dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5\") " pod="hostpath-provisioner/csi-hostpathplugin-7vxjx" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.327995 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f436432a-f92b-4b2a-89a9-8014f487dc12-config-volume\") pod \"collect-profiles-29525085-npsvp\" (UID: \"f436432a-f92b-4b2a-89a9-8014f487dc12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525085-npsvp" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328018 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/360a44b7-5f42-42d7-918c-226761dbbd2c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-fr925\" (UID: \"360a44b7-5f42-42d7-918c-226761dbbd2c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fr925" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328043 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz44h\" (UniqueName: \"kubernetes.io/projected/f436432a-f92b-4b2a-89a9-8014f487dc12-kube-api-access-mz44h\") pod \"collect-profiles-29525085-npsvp\" (UID: \"f436432a-f92b-4b2a-89a9-8014f487dc12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525085-npsvp" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328066 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdprr\" (UniqueName: \"kubernetes.io/projected/6cee23cd-f0ff-4954-9497-69b2097a34f1-kube-api-access-qdprr\") pod \"olm-operator-6b444d44fb-bhmhd\" (UID: \"6cee23cd-f0ff-4954-9497-69b2097a34f1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhmhd" Feb 19 12:48:51 crc 
kubenswrapper[4833]: I0219 12:48:51.328089 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wdmq\" (UniqueName: \"kubernetes.io/projected/e0f8a770-7d1a-430b-8a25-aa325b17c767-kube-api-access-2wdmq\") pod \"service-ca-9c57cc56f-4bk7q\" (UID: \"e0f8a770-7d1a-430b-8a25-aa325b17c767\") " pod="openshift-service-ca/service-ca-9c57cc56f-4bk7q" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328111 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svwbn\" (UniqueName: \"kubernetes.io/projected/ab5bee17-16ff-4e1b-9868-69443e2b10d4-kube-api-access-svwbn\") pod \"dns-default-86hv6\" (UID: \"ab5bee17-16ff-4e1b-9868-69443e2b10d4\") " pod="openshift-dns/dns-default-86hv6" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328131 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88c5b880-561a-4961-91be-bc1ee9bdd96b-metrics-certs\") pod \"router-default-5444994796-k9bp4\" (UID: \"88c5b880-561a-4961-91be-bc1ee9bdd96b\") " pod="openshift-ingress/router-default-5444994796-k9bp4" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328155 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d118e366-ab6c-41e1-9aae-c993e9125fd4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8cbn5\" (UID: \"d118e366-ab6c-41e1-9aae-c993e9125fd4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8cbn5" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328204 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d118e366-ab6c-41e1-9aae-c993e9125fd4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8cbn5\" (UID: \"d118e366-ab6c-41e1-9aae-c993e9125fd4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8cbn5" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328225 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bbqb\" (UniqueName: \"kubernetes.io/projected/a3b401b0-2191-450a-9d87-e8066678f93b-kube-api-access-9bbqb\") pod \"machine-config-server-xjng4\" (UID: \"a3b401b0-2191-450a-9d87-e8066678f93b\") " pod="openshift-machine-config-operator/machine-config-server-xjng4" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328244 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f436432a-f92b-4b2a-89a9-8014f487dc12-secret-volume\") pod \"collect-profiles-29525085-npsvp\" (UID: \"f436432a-f92b-4b2a-89a9-8014f487dc12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525085-npsvp" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328263 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5-plugins-dir\") pod \"csi-hostpathplugin-7vxjx\" (UID: \"dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5\") " pod="hostpath-provisioner/csi-hostpathplugin-7vxjx" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328281 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/a3b401b0-2191-450a-9d87-e8066678f93b-node-bootstrap-token\") pod \"machine-config-server-xjng4\" (UID: \"a3b401b0-2191-450a-9d87-e8066678f93b\") " pod="openshift-machine-config-operator/machine-config-server-xjng4" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328300 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a3b401b0-2191-450a-9d87-e8066678f93b-certs\") pod \"machine-config-server-xjng4\" (UID: \"a3b401b0-2191-450a-9d87-e8066678f93b\") " pod="openshift-machine-config-operator/machine-config-server-xjng4" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328322 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab5bee17-16ff-4e1b-9868-69443e2b10d4-config-volume\") pod \"dns-default-86hv6\" (UID: \"ab5bee17-16ff-4e1b-9868-69443e2b10d4\") " pod="openshift-dns/dns-default-86hv6" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328343 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b08cc06-0056-43c0-a73f-9070d99cc0b5-cert\") pod \"ingress-canary-wdshf\" (UID: \"8b08cc06-0056-43c0-a73f-9070d99cc0b5\") " pod="openshift-ingress-canary/ingress-canary-wdshf" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328374 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6cee23cd-f0ff-4954-9497-69b2097a34f1-srv-cert\") pod \"olm-operator-6b444d44fb-bhmhd\" (UID: \"6cee23cd-f0ff-4954-9497-69b2097a34f1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhmhd" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328395 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88c5b880-561a-4961-91be-bc1ee9bdd96b-service-ca-bundle\") pod \"router-default-5444994796-k9bp4\" (UID: \"88c5b880-561a-4961-91be-bc1ee9bdd96b\") " pod="openshift-ingress/router-default-5444994796-k9bp4" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328423 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlg6w\" (UniqueName: \"kubernetes.io/projected/360a44b7-5f42-42d7-918c-226761dbbd2c-kube-api-access-vlg6w\") pod \"machine-config-operator-74547568cd-fr925\" (UID: \"360a44b7-5f42-42d7-918c-226761dbbd2c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fr925" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328445 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fj58\" (UniqueName: \"kubernetes.io/projected/88c5b880-561a-4961-91be-bc1ee9bdd96b-kube-api-access-4fj58\") pod \"router-default-5444994796-k9bp4\" (UID: \"88c5b880-561a-4961-91be-bc1ee9bdd96b\") " pod="openshift-ingress/router-default-5444994796-k9bp4" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328474 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/360a44b7-5f42-42d7-918c-226761dbbd2c-proxy-tls\") pod \"machine-config-operator-74547568cd-fr925\" (UID: \"360a44b7-5f42-42d7-918c-226761dbbd2c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fr925" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328514 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chbb9\" (UniqueName: \"kubernetes.io/projected/dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5-kube-api-access-chbb9\") pod \"csi-hostpathplugin-7vxjx\" (UID: \"dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5\") " pod="hostpath-provisioner/csi-hostpathplugin-7vxjx" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328556 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b1bfd0e4-a923-4520-bd3c-df5aa3e2dfcc-srv-cert\") pod \"catalog-operator-68c6474976-mzk7x\" (UID: \"b1bfd0e4-a923-4520-bd3c-df5aa3e2dfcc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzk7x" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328578 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/20fbe5b6-4f07-41fd-a5f8-05c9d2c71089-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-nlrpv\" (UID: \"20fbe5b6-4f07-41fd-a5f8-05c9d2c71089\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nlrpv" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328598 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61a2a996-37aa-420f-b45e-9776c269d9dd-config\") pod \"service-ca-operator-777779d784-9r2j9\" (UID: \"61a2a996-37aa-420f-b45e-9776c269d9dd\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9r2j9" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328635 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwvvr\" (UniqueName: \"kubernetes.io/projected/ea6cc7f7-b2fa-40d4-93cd-795a01861ecb-kube-api-access-dwvvr\") pod \"control-plane-machine-set-operator-78cbb6b69f-4gj46\" (UID: \"ea6cc7f7-b2fa-40d4-93cd-795a01861ecb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4gj46" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328655 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/846a8d85-beb7-4c48-9705-59ed68378f4c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4xshp\" (UID: \"846a8d85-beb7-4c48-9705-59ed68378f4c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4xshp" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328676 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5-mountpoint-dir\") pod \"csi-hostpathplugin-7vxjx\" (UID: \"dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5\") " pod="hostpath-provisioner/csi-hostpathplugin-7vxjx" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328695 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/20fbe5b6-4f07-41fd-a5f8-05c9d2c71089-proxy-tls\") pod \"machine-config-controller-84d6567774-nlrpv\" (UID: \"20fbe5b6-4f07-41fd-a5f8-05c9d2c71089\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nlrpv" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328719 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcnrz\" (UniqueName: 
\"kubernetes.io/projected/7d7f1229-1f55-416b-beeb-60a3ae0abc62-kube-api-access-lcnrz\") pod \"marketplace-operator-79b997595-zw6vx\" (UID: \"7d7f1229-1f55-416b-beeb-60a3ae0abc62\") " pod="openshift-marketplace/marketplace-operator-79b997595-zw6vx" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328741 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45708559-b521-4b8d-a745-12119e61a8cb-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9hhkc\" (UID: \"45708559-b521-4b8d-a745-12119e61a8cb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9hhkc" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328761 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5-socket-dir\") pod \"csi-hostpathplugin-7vxjx\" (UID: \"dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5\") " pod="hostpath-provisioner/csi-hostpathplugin-7vxjx" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328781 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7fts\" (UniqueName: \"kubernetes.io/projected/119b17a5-9014-4cea-b2cc-32e410c88465-kube-api-access-x7fts\") pod \"packageserver-d55dfcdfc-5db89\" (UID: \"119b17a5-9014-4cea-b2cc-32e410c88465\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5db89" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328803 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5-csi-data-dir\") pod \"csi-hostpathplugin-7vxjx\" (UID: \"dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5\") " pod="hostpath-provisioner/csi-hostpathplugin-7vxjx" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328823 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee0e0b2d-bf8a-4d70-b85f-b21bd59baaeb-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rzdxr\" (UID: \"ee0e0b2d-bf8a-4d70-b85f-b21bd59baaeb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rzdxr" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328854 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea6cc7f7-b2fa-40d4-93cd-795a01861ecb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-4gj46\" (UID: \"ea6cc7f7-b2fa-40d4-93cd-795a01861ecb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4gj46" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328875 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/88c5b880-561a-4961-91be-bc1ee9bdd96b-stats-auth\") pod \"router-default-5444994796-k9bp4\" (UID: \"88c5b880-561a-4961-91be-bc1ee9bdd96b\") " pod="openshift-ingress/router-default-5444994796-k9bp4" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328904 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e0f8a770-7d1a-430b-8a25-aa325b17c767-signing-key\") pod \"service-ca-9c57cc56f-4bk7q\" (UID: \"e0f8a770-7d1a-430b-8a25-aa325b17c767\") 
" pod="openshift-service-ca/service-ca-9c57cc56f-4bk7q" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328924 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/846a8d85-beb7-4c48-9705-59ed68378f4c-config\") pod \"kube-controller-manager-operator-78b949d7b-4xshp\" (UID: \"846a8d85-beb7-4c48-9705-59ed68378f4c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4xshp" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328944 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/119b17a5-9014-4cea-b2cc-32e410c88465-tmpfs\") pod \"packageserver-d55dfcdfc-5db89\" (UID: \"119b17a5-9014-4cea-b2cc-32e410c88465\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5db89" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328966 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e0f8a770-7d1a-430b-8a25-aa325b17c767-signing-cabundle\") pod \"service-ca-9c57cc56f-4bk7q\" (UID: \"e0f8a770-7d1a-430b-8a25-aa325b17c767\") " pod="openshift-service-ca/service-ca-9c57cc56f-4bk7q" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.328992 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/14cacc6a-664d-4560-875d-55e4c731671a-metrics-tls\") pod \"dns-operator-744455d44c-qcdcz\" (UID: \"14cacc6a-664d-4560-875d-55e4c731671a\") " pod="openshift-dns-operator/dns-operator-744455d44c-qcdcz" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.329022 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d118e366-ab6c-41e1-9aae-c993e9125fd4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8cbn5\" (UID: \"d118e366-ab6c-41e1-9aae-c993e9125fd4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8cbn5" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.329042 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m2t7\" (UniqueName: \"kubernetes.io/projected/b1bfd0e4-a923-4520-bd3c-df5aa3e2dfcc-kube-api-access-4m2t7\") pod \"catalog-operator-68c6474976-mzk7x\" (UID: \"b1bfd0e4-a923-4520-bd3c-df5aa3e2dfcc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzk7x" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.329059 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61a2a996-37aa-420f-b45e-9776c269d9dd-serving-cert\") pod \"service-ca-operator-777779d784-9r2j9\" (UID: \"61a2a996-37aa-420f-b45e-9776c269d9dd\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9r2j9" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.329084 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/119b17a5-9014-4cea-b2cc-32e410c88465-webhook-cert\") pod \"packageserver-d55dfcdfc-5db89\" (UID: \"119b17a5-9014-4cea-b2cc-32e410c88465\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5db89" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.329105 4833 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45708559-b521-4b8d-a745-12119e61a8cb-serving-cert\") pod \"controller-manager-879f6c89f-9hhkc\" (UID: \"45708559-b521-4b8d-a745-12119e61a8cb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9hhkc" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.329127 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/88c5b880-561a-4961-91be-bc1ee9bdd96b-default-certificate\") pod \"router-default-5444994796-k9bp4\" (UID: \"88c5b880-561a-4961-91be-bc1ee9bdd96b\") " pod="openshift-ingress/router-default-5444994796-k9bp4" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.329153 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-877dl\" (UniqueName: \"kubernetes.io/projected/45708559-b521-4b8d-a745-12119e61a8cb-kube-api-access-877dl\") pod \"controller-manager-879f6c89f-9hhkc\" (UID: \"45708559-b521-4b8d-a745-12119e61a8cb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9hhkc" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.329177 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/87c5cf64-6aee-4660-9847-5161a05a0410-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-shgxh\" (UID: \"87c5cf64-6aee-4660-9847-5161a05a0410\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-shgxh" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.329197 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/360a44b7-5f42-42d7-918c-226761dbbd2c-images\") pod \"machine-config-operator-74547568cd-fr925\" (UID: \"360a44b7-5f42-42d7-918c-226761dbbd2c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fr925" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.329220 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c314e60b-4099-42e8-9eff-e3ef54025cc3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bpx5m\" (UID: \"c314e60b-4099-42e8-9eff-e3ef54025cc3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bpx5m" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.329254 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b1bfd0e4-a923-4520-bd3c-df5aa3e2dfcc-profile-collector-cert\") pod \"catalog-operator-68c6474976-mzk7x\" (UID: \"b1bfd0e4-a923-4520-bd3c-df5aa3e2dfcc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzk7x" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.329276 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvrh9\" (UniqueName: \"kubernetes.io/projected/c314e60b-4099-42e8-9eff-e3ef54025cc3-kube-api-access-zvrh9\") pod \"package-server-manager-789f6589d5-bpx5m\" (UID: \"c314e60b-4099-42e8-9eff-e3ef54025cc3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bpx5m" Feb 19 12:48:51 crc kubenswrapper[4833]: E0219 12:48:51.329355 4833 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:51.82933789 +0000 UTC m=+142.224856658 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.329950 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45708559-b521-4b8d-a745-12119e61a8cb-client-ca\") pod \"controller-manager-879f6c89f-9hhkc\" (UID: \"45708559-b521-4b8d-a745-12119e61a8cb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9hhkc" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.330058 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab5bee17-16ff-4e1b-9868-69443e2b10d4-config-volume\") pod \"dns-default-86hv6\" (UID: \"ab5bee17-16ff-4e1b-9868-69443e2b10d4\") " pod="openshift-dns/dns-default-86hv6" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.330110 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d118e366-ab6c-41e1-9aae-c993e9125fd4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8cbn5\" (UID: \"d118e366-ab6c-41e1-9aae-c993e9125fd4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8cbn5" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.331450 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45708559-b521-4b8d-a745-12119e61a8cb-config\") pod \"controller-manager-879f6c89f-9hhkc\" (UID: \"45708559-b521-4b8d-a745-12119e61a8cb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9hhkc" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.332380 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee0e0b2d-bf8a-4d70-b85f-b21bd59baaeb-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rzdxr\" (UID: \"ee0e0b2d-bf8a-4d70-b85f-b21bd59baaeb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rzdxr" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.335276 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5-registration-dir\") pod \"csi-hostpathplugin-7vxjx\" (UID: \"dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5\") " pod="hostpath-provisioner/csi-hostpathplugin-7vxjx" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.335710 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/846a8d85-beb7-4c48-9705-59ed68378f4c-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-4xshp\" (UID: \"846a8d85-beb7-4c48-9705-59ed68378f4c\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4xshp" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.335664 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/360a44b7-5f42-42d7-918c-226761dbbd2c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-fr925\" (UID: \"360a44b7-5f42-42d7-918c-226761dbbd2c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fr925" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.337191 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/119b17a5-9014-4cea-b2cc-32e410c88465-tmpfs\") pod \"packageserver-d55dfcdfc-5db89\" (UID: \"119b17a5-9014-4cea-b2cc-32e410c88465\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5db89" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.337421 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/846a8d85-beb7-4c48-9705-59ed68378f4c-config\") pod \"kube-controller-manager-operator-78b949d7b-4xshp\" (UID: \"846a8d85-beb7-4c48-9705-59ed68378f4c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4xshp" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.339557 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5-mountpoint-dir\") pod \"csi-hostpathplugin-7vxjx\" (UID: \"dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5\") " pod="hostpath-provisioner/csi-hostpathplugin-7vxjx" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.339608 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5-socket-dir\") pod \"csi-hostpathplugin-7vxjx\" (UID: \"dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5\") " pod="hostpath-provisioner/csi-hostpathplugin-7vxjx" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.340018 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61a2a996-37aa-420f-b45e-9776c269d9dd-config\") pod \"service-ca-operator-777779d784-9r2j9\" (UID: \"61a2a996-37aa-420f-b45e-9776c269d9dd\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9r2j9" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.341006 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5-plugins-dir\") pod \"csi-hostpathplugin-7vxjx\" (UID: \"dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5\") " pod="hostpath-provisioner/csi-hostpathplugin-7vxjx" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.341456 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88c5b880-561a-4961-91be-bc1ee9bdd96b-service-ca-bundle\") pod \"router-default-5444994796-k9bp4\" (UID: \"88c5b880-561a-4961-91be-bc1ee9bdd96b\") " pod="openshift-ingress/router-default-5444994796-k9bp4" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.341911 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/88c5b880-561a-4961-91be-bc1ee9bdd96b-stats-auth\") pod 
\"router-default-5444994796-k9bp4\" (UID: \"88c5b880-561a-4961-91be-bc1ee9bdd96b\") " pod="openshift-ingress/router-default-5444994796-k9bp4" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.342337 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45708559-b521-4b8d-a745-12119e61a8cb-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-9hhkc\" (UID: \"45708559-b521-4b8d-a745-12119e61a8cb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9hhkc" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.342423 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/20fbe5b6-4f07-41fd-a5f8-05c9d2c71089-proxy-tls\") pod \"machine-config-controller-84d6567774-nlrpv\" (UID: \"20fbe5b6-4f07-41fd-a5f8-05c9d2c71089\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nlrpv" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.342772 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5-csi-data-dir\") pod \"csi-hostpathplugin-7vxjx\" (UID: \"dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5\") " pod="hostpath-provisioner/csi-hostpathplugin-7vxjx" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.343079 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b1bfd0e4-a923-4520-bd3c-df5aa3e2dfcc-profile-collector-cert\") pod \"catalog-operator-68c6474976-mzk7x\" (UID: \"b1bfd0e4-a923-4520-bd3c-df5aa3e2dfcc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzk7x" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.344301 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/20fbe5b6-4f07-41fd-a5f8-05c9d2c71089-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-nlrpv\" (UID: \"20fbe5b6-4f07-41fd-a5f8-05c9d2c71089\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nlrpv" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.344427 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/ea6cc7f7-b2fa-40d4-93cd-795a01861ecb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-4gj46\" (UID: \"ea6cc7f7-b2fa-40d4-93cd-795a01861ecb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4gj46" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.344755 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b1bfd0e4-a923-4520-bd3c-df5aa3e2dfcc-srv-cert\") pod \"catalog-operator-68c6474976-mzk7x\" (UID: \"b1bfd0e4-a923-4520-bd3c-df5aa3e2dfcc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzk7x" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.345080 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/e0f8a770-7d1a-430b-8a25-aa325b17c767-signing-cabundle\") pod \"service-ca-9c57cc56f-4bk7q\" (UID: \"e0f8a770-7d1a-430b-8a25-aa325b17c767\") " pod="openshift-service-ca/service-ca-9c57cc56f-4bk7q" Feb 19 12:48:51 crc 
kubenswrapper[4833]: I0219 12:48:51.345710 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f436432a-f92b-4b2a-89a9-8014f487dc12-config-volume\") pod \"collect-profiles-29525085-npsvp\" (UID: \"f436432a-f92b-4b2a-89a9-8014f487dc12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525085-npsvp" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.348214 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/14cacc6a-664d-4560-875d-55e4c731671a-metrics-tls\") pod \"dns-operator-744455d44c-qcdcz\" (UID: \"14cacc6a-664d-4560-875d-55e4c731671a\") " pod="openshift-dns-operator/dns-operator-744455d44c-qcdcz" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.348825 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6cee23cd-f0ff-4954-9497-69b2097a34f1-srv-cert\") pod \"olm-operator-6b444d44fb-bhmhd\" (UID: \"6cee23cd-f0ff-4954-9497-69b2097a34f1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhmhd" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.349315 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/119b17a5-9014-4cea-b2cc-32e410c88465-webhook-cert\") pod \"packageserver-d55dfcdfc-5db89\" (UID: \"119b17a5-9014-4cea-b2cc-32e410c88465\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5db89" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.350732 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/360a44b7-5f42-42d7-918c-226761dbbd2c-images\") pod \"machine-config-operator-74547568cd-fr925\" (UID: \"360a44b7-5f42-42d7-918c-226761dbbd2c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fr925" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.350992 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d7f1229-1f55-416b-beeb-60a3ae0abc62-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zw6vx\" (UID: \"7d7f1229-1f55-416b-beeb-60a3ae0abc62\") " pod="openshift-marketplace/marketplace-operator-79b997595-zw6vx" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.352102 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/360a44b7-5f42-42d7-918c-226761dbbd2c-proxy-tls\") pod \"machine-config-operator-74547568cd-fr925\" (UID: \"360a44b7-5f42-42d7-918c-226761dbbd2c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fr925" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.352676 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/e0f8a770-7d1a-430b-8a25-aa325b17c767-signing-key\") pod \"service-ca-9c57cc56f-4bk7q\" (UID: \"e0f8a770-7d1a-430b-8a25-aa325b17c767\") " pod="openshift-service-ca/service-ca-9c57cc56f-4bk7q" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.353942 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a3b401b0-2191-450a-9d87-e8066678f93b-certs\") pod \"machine-config-server-xjng4\" (UID: \"a3b401b0-2191-450a-9d87-e8066678f93b\") " 
pod="openshift-machine-config-operator/machine-config-server-xjng4" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.354023 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7d7f1229-1f55-416b-beeb-60a3ae0abc62-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zw6vx\" (UID: \"7d7f1229-1f55-416b-beeb-60a3ae0abc62\") " pod="openshift-marketplace/marketplace-operator-79b997595-zw6vx" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.354051 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45708559-b521-4b8d-a745-12119e61a8cb-serving-cert\") pod \"controller-manager-879f6c89f-9hhkc\" (UID: \"45708559-b521-4b8d-a745-12119e61a8cb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9hhkc" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.354086 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a3b401b0-2191-450a-9d87-e8066678f93b-node-bootstrap-token\") pod \"machine-config-server-xjng4\" (UID: \"a3b401b0-2191-450a-9d87-e8066678f93b\") " pod="openshift-machine-config-operator/machine-config-server-xjng4" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.354358 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8b08cc06-0056-43c0-a73f-9070d99cc0b5-cert\") pod \"ingress-canary-wdshf\" (UID: \"8b08cc06-0056-43c0-a73f-9070d99cc0b5\") " pod="openshift-ingress-canary/ingress-canary-wdshf" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.354564 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c314e60b-4099-42e8-9eff-e3ef54025cc3-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-bpx5m\" (UID: \"c314e60b-4099-42e8-9eff-e3ef54025cc3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bpx5m" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.354984 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d118e366-ab6c-41e1-9aae-c993e9125fd4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8cbn5\" (UID: \"d118e366-ab6c-41e1-9aae-c993e9125fd4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8cbn5" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.356104 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88c5b880-561a-4961-91be-bc1ee9bdd96b-metrics-certs\") pod \"router-default-5444994796-k9bp4\" (UID: \"88c5b880-561a-4961-91be-bc1ee9bdd96b\") " pod="openshift-ingress/router-default-5444994796-k9bp4" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.356626 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6cee23cd-f0ff-4954-9497-69b2097a34f1-profile-collector-cert\") pod \"olm-operator-6b444d44fb-bhmhd\" (UID: \"6cee23cd-f0ff-4954-9497-69b2097a34f1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhmhd" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.357182 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/61a2a996-37aa-420f-b45e-9776c269d9dd-serving-cert\") pod \"service-ca-operator-777779d784-9r2j9\" (UID: \"61a2a996-37aa-420f-b45e-9776c269d9dd\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9r2j9" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.357277 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/88c5b880-561a-4961-91be-bc1ee9bdd96b-default-certificate\") pod \"router-default-5444994796-k9bp4\" (UID: \"88c5b880-561a-4961-91be-bc1ee9bdd96b\") " pod="openshift-ingress/router-default-5444994796-k9bp4" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.357348 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee0e0b2d-bf8a-4d70-b85f-b21bd59baaeb-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rzdxr\" (UID: \"ee0e0b2d-bf8a-4d70-b85f-b21bd59baaeb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rzdxr" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.358141 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ab5bee17-16ff-4e1b-9868-69443e2b10d4-metrics-tls\") pod \"dns-default-86hv6\" (UID: \"ab5bee17-16ff-4e1b-9868-69443e2b10d4\") " pod="openshift-dns/dns-default-86hv6" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.358516 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dfc152d3-9326-4602-8b02-c9fbc8f73199-bound-sa-token\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.358944 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/87c5cf64-6aee-4660-9847-5161a05a0410-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-shgxh\" (UID: \"87c5cf64-6aee-4660-9847-5161a05a0410\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-shgxh" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.359962 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/119b17a5-9014-4cea-b2cc-32e410c88465-apiservice-cert\") pod \"packageserver-d55dfcdfc-5db89\" (UID: \"119b17a5-9014-4cea-b2cc-32e410c88465\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5db89" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.369008 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfswp\" (UniqueName: \"kubernetes.io/projected/749453c5-3458-41d2-b1ab-aca8e018cfd5-kube-api-access-sfswp\") pod \"ingress-operator-5b745b69d9-k855k\" (UID: \"749453c5-3458-41d2-b1ab-aca8e018cfd5\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k855k" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.374208 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f436432a-f92b-4b2a-89a9-8014f487dc12-secret-volume\") pod \"collect-profiles-29525085-npsvp\" (UID: \"f436432a-f92b-4b2a-89a9-8014f487dc12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525085-npsvp" Feb 19 12:48:51 crc 
kubenswrapper[4833]: I0219 12:48:51.385319 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzzjp\" (UniqueName: \"kubernetes.io/projected/38ba40b9-14c2-419d-a54f-9dbc0f1ada2f-kube-api-access-tzzjp\") pod \"etcd-operator-b45778765-wqw4k\" (UID: \"38ba40b9-14c2-419d-a54f-9dbc0f1ada2f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-wqw4k" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.403645 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/98f12eb2-abac-4859-870d-72555b13cda8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mcmtw\" (UID: \"98f12eb2-abac-4859-870d-72555b13cda8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mcmtw" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.425105 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zjv88" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.430705 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4v7n\" (UniqueName: \"kubernetes.io/projected/dfc152d3-9326-4602-8b02-c9fbc8f73199-kube-api-access-c4v7n\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.431959 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:51 crc kubenswrapper[4833]: E0219 12:48:51.432537 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:51.932521253 +0000 UTC m=+142.328040021 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.454096 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgdvm\" (UniqueName: \"kubernetes.io/projected/27e9c527-e726-412a-ac42-d8b8974f136f-kube-api-access-dgdvm\") pod \"cluster-samples-operator-665b6dd947-qdwvj\" (UID: \"27e9c527-e726-412a-ac42-d8b8974f136f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdwvj" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.486126 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr9qn\" (UniqueName: \"kubernetes.io/projected/59ef1c2f-bfb4-45af-9ae4-8d0455a5691d-kube-api-access-qr9qn\") pod \"console-operator-58897d9998-z4dsv\" (UID: \"59ef1c2f-bfb4-45af-9ae4-8d0455a5691d\") " pod="openshift-console-operator/console-operator-58897d9998-z4dsv" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.505002 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c9bk\" (UniqueName: \"kubernetes.io/projected/408486ff-077c-4f21-9c9c-e853669e312f-kube-api-access-5c9bk\") pod \"migrator-59844c95c7-flg7d\" (UID: \"408486ff-077c-4f21-9c9c-e853669e312f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-flg7d" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.527137 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrx8p\" (UniqueName: \"kubernetes.io/projected/8b08cc06-0056-43c0-a73f-9070d99cc0b5-kube-api-access-xrx8p\") pod \"ingress-canary-wdshf\" (UID: \"8b08cc06-0056-43c0-a73f-9070d99cc0b5\") " pod="openshift-ingress-canary/ingress-canary-wdshf" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.533484 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-flg7d" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.533818 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:48:51 crc kubenswrapper[4833]: E0219 12:48:51.534058 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:52.034033994 +0000 UTC m=+142.429552772 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.534361 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:51 crc kubenswrapper[4833]: E0219 12:48:51.534706 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:52.03468959 +0000 UTC m=+142.430208358 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.550906 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lhlvm"] Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.558244 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvrh9\" (UniqueName: \"kubernetes.io/projected/c314e60b-4099-42e8-9eff-e3ef54025cc3-kube-api-access-zvrh9\") pod \"package-server-manager-789f6589d5-bpx5m\" (UID: \"c314e60b-4099-42e8-9eff-e3ef54025cc3\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bpx5m" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.559948 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-x2kph"] Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.562184 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzmp9"] Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.573425 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brljv\" (UniqueName: \"kubernetes.io/projected/87c5cf64-6aee-4660-9847-5161a05a0410-kube-api-access-brljv\") pod \"multus-admission-controller-857f4d67dd-shgxh\" (UID: \"87c5cf64-6aee-4660-9847-5161a05a0410\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-shgxh" Feb 19 12:48:51 crc kubenswrapper[4833]: W0219 12:48:51.583238 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd665cab8_36f2_4952_b4ca_75f832485488.slice/crio-dcfcbaba2fb454979a91c317e09b4a57f9e5adbf897e8e9696015d5bb1199cb1 WatchSource:0}: Error finding 
container dcfcbaba2fb454979a91c317e09b4a57f9e5adbf897e8e9696015d5bb1199cb1: Status 404 returned error can't find the container with id dcfcbaba2fb454979a91c317e09b4a57f9e5adbf897e8e9696015d5bb1199cb1 Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.586856 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nwx99" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.593389 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdprr\" (UniqueName: \"kubernetes.io/projected/6cee23cd-f0ff-4954-9497-69b2097a34f1-kube-api-access-qdprr\") pod \"olm-operator-6b444d44fb-bhmhd\" (UID: \"6cee23cd-f0ff-4954-9497-69b2097a34f1\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhmhd" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.596552 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-shgxh" Feb 19 12:48:51 crc kubenswrapper[4833]: W0219 12:48:51.599509 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17e0b921_253c_44e3_8abd_616a4c22825c.slice/crio-56831085d07e26d8902e9d8dcafe610b45d8836b5309c21566a26ef29101a9f3 WatchSource:0}: Error finding container 56831085d07e26d8902e9d8dcafe610b45d8836b5309c21566a26ef29101a9f3: Status 404 returned error can't find the container with id 56831085d07e26d8902e9d8dcafe610b45d8836b5309c21566a26ef29101a9f3 Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.603813 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdwvj" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.605666 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hwh5m"] Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.616910 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mqtt\" (UniqueName: \"kubernetes.io/projected/14cacc6a-664d-4560-875d-55e4c731671a-kube-api-access-6mqtt\") pod \"dns-operator-744455d44c-qcdcz\" (UID: \"14cacc6a-664d-4560-875d-55e4c731671a\") " pod="openshift-dns-operator/dns-operator-744455d44c-qcdcz" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.624561 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-cm4vr"] Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.625452 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lhsb\" (UniqueName: \"kubernetes.io/projected/ee0e0b2d-bf8a-4d70-b85f-b21bd59baaeb-kube-api-access-2lhsb\") pod \"kube-storage-version-migrator-operator-b67b599dd-rzdxr\" (UID: \"ee0e0b2d-bf8a-4d70-b85f-b21bd59baaeb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rzdxr" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.631214 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5w7fq"] Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.635163 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:48:51 crc kubenswrapper[4833]: E0219 12:48:51.636092 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:52.136070638 +0000 UTC m=+142.531589406 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.637057 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-wdshf" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.645214 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-z4dsv" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.647105 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkhgz\" (UniqueName: \"kubernetes.io/projected/20fbe5b6-4f07-41fd-a5f8-05c9d2c71089-kube-api-access-nkhgz\") pod \"machine-config-controller-84d6567774-nlrpv\" (UID: \"20fbe5b6-4f07-41fd-a5f8-05c9d2c71089\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nlrpv" Feb 19 12:48:51 crc kubenswrapper[4833]: W0219 12:48:51.653741 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85474a25_567e_4ef0_be8a_75de8c7d18d9.slice/crio-750a99609df20ae10225262ae434a4b7e5d30109b002244b29b14374a1142074 WatchSource:0}: Error finding container 750a99609df20ae10225262ae434a4b7e5d30109b002244b29b14374a1142074: Status 404 returned error can't find the container with id 750a99609df20ae10225262ae434a4b7e5d30109b002244b29b14374a1142074 Feb 19 12:48:51 crc kubenswrapper[4833]: W0219 12:48:51.660330 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda50be962_42d3_4e0a_bf3d_13e00fd679a2.slice/crio-edf3ea07d9c681481f8743e279d434706f00e43ced8468cdede5edab09fecc3a WatchSource:0}: Error finding container edf3ea07d9c681481f8743e279d434706f00e43ced8468cdede5edab09fecc3a: Status 404 returned error can't find the container with id edf3ea07d9c681481f8743e279d434706f00e43ced8468cdede5edab09fecc3a Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.668332 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wdmq\" (UniqueName: \"kubernetes.io/projected/e0f8a770-7d1a-430b-8a25-aa325b17c767-kube-api-access-2wdmq\") pod \"service-ca-9c57cc56f-4bk7q\" (UID: \"e0f8a770-7d1a-430b-8a25-aa325b17c767\") " pod="openshift-service-ca/service-ca-9c57cc56f-4bk7q" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.668547 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k855k" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.675257 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-wqw4k" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.680873 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-zjv88"] Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.685134 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc4lz\" (UniqueName: \"kubernetes.io/projected/61a2a996-37aa-420f-b45e-9776c269d9dd-kube-api-access-kc4lz\") pod \"service-ca-operator-777779d784-9r2j9\" (UID: \"61a2a996-37aa-420f-b45e-9776c269d9dd\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9r2j9" Feb 19 12:48:51 crc kubenswrapper[4833]: W0219 12:48:51.700641 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8dd5929b_7ec0_43c7_beb6_e3a3afbeec63.slice/crio-f3703f41ba8ea714c509bc765ff5ebde26279bad586041f16c0abda6e48d1678 WatchSource:0}: Error finding container f3703f41ba8ea714c509bc765ff5ebde26279bad586041f16c0abda6e48d1678: Status 404 returned error can't find the container with id f3703f41ba8ea714c509bc765ff5ebde26279bad586041f16c0abda6e48d1678 Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.702444 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mcmtw" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.706258 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svwbn\" (UniqueName: \"kubernetes.io/projected/ab5bee17-16ff-4e1b-9868-69443e2b10d4-kube-api-access-svwbn\") pod \"dns-default-86hv6\" (UID: \"ab5bee17-16ff-4e1b-9868-69443e2b10d4\") " pod="openshift-dns/dns-default-86hv6" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.727809 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bbqb\" (UniqueName: \"kubernetes.io/projected/a3b401b0-2191-450a-9d87-e8066678f93b-kube-api-access-9bbqb\") pod \"machine-config-server-xjng4\" (UID: \"a3b401b0-2191-450a-9d87-e8066678f93b\") " pod="openshift-machine-config-operator/machine-config-server-xjng4" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.736800 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-qcdcz" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.737883 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:51 crc kubenswrapper[4833]: E0219 12:48:51.738215 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:52.238200104 +0000 UTC m=+142.633718872 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.744903 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwvvr\" (UniqueName: \"kubernetes.io/projected/ea6cc7f7-b2fa-40d4-93cd-795a01861ecb-kube-api-access-dwvvr\") pod \"control-plane-machine-set-operator-78cbb6b69f-4gj46\" (UID: \"ea6cc7f7-b2fa-40d4-93cd-795a01861ecb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4gj46" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.768183 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4bk7q" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.771961 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chbb9\" (UniqueName: \"kubernetes.io/projected/dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5-kube-api-access-chbb9\") pod \"csi-hostpathplugin-7vxjx\" (UID: \"dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5\") " pod="hostpath-provisioner/csi-hostpathplugin-7vxjx" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.782648 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhmhd" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.788600 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcnrz\" (UniqueName: \"kubernetes.io/projected/7d7f1229-1f55-416b-beeb-60a3ae0abc62-kube-api-access-lcnrz\") pod \"marketplace-operator-79b997595-zw6vx\" (UID: \"7d7f1229-1f55-416b-beeb-60a3ae0abc62\") " pod="openshift-marketplace/marketplace-operator-79b997595-zw6vx" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.822157 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9r2j9" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.822527 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-877dl\" (UniqueName: \"kubernetes.io/projected/45708559-b521-4b8d-a745-12119e61a8cb-kube-api-access-877dl\") pod \"controller-manager-879f6c89f-9hhkc\" (UID: \"45708559-b521-4b8d-a745-12119e61a8cb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-9hhkc" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.839662 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rzdxr" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.840921 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:48:51 crc kubenswrapper[4833]: E0219 12:48:51.841143 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:52.34111108 +0000 UTC m=+142.736629888 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.841191 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:51 crc kubenswrapper[4833]: E0219 12:48:51.841466 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:52.341453669 +0000 UTC m=+142.736972437 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.842054 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d118e366-ab6c-41e1-9aae-c993e9125fd4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8cbn5\" (UID: \"d118e366-ab6c-41e1-9aae-c993e9125fd4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8cbn5" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.847624 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bpx5m" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.849594 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlg6w\" (UniqueName: \"kubernetes.io/projected/360a44b7-5f42-42d7-918c-226761dbbd2c-kube-api-access-vlg6w\") pod \"machine-config-operator-74547568cd-fr925\" (UID: \"360a44b7-5f42-42d7-918c-226761dbbd2c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fr925" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.851074 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nwx99"] Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.860194 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fr925" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.867791 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/846a8d85-beb7-4c48-9705-59ed68378f4c-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-4xshp\" (UID: \"846a8d85-beb7-4c48-9705-59ed68378f4c\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4xshp" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.882339 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nlrpv" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.886097 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fj58\" (UniqueName: \"kubernetes.io/projected/88c5b880-561a-4961-91be-bc1ee9bdd96b-kube-api-access-4fj58\") pod \"router-default-5444994796-k9bp4\" (UID: \"88c5b880-561a-4961-91be-bc1ee9bdd96b\") " pod="openshift-ingress/router-default-5444994796-k9bp4" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.907481 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m2t7\" (UniqueName: \"kubernetes.io/projected/b1bfd0e4-a923-4520-bd3c-df5aa3e2dfcc-kube-api-access-4m2t7\") pod \"catalog-operator-68c6474976-mzk7x\" (UID: \"b1bfd0e4-a923-4520-bd3c-df5aa3e2dfcc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzk7x" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.915998 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4xshp" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.926070 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-86hv6" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.927204 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz44h\" (UniqueName: \"kubernetes.io/projected/f436432a-f92b-4b2a-89a9-8014f487dc12-kube-api-access-mz44h\") pod \"collect-profiles-29525085-npsvp\" (UID: \"f436432a-f92b-4b2a-89a9-8014f487dc12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525085-npsvp" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.945121 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:48:51 crc kubenswrapper[4833]: E0219 12:48:51.945466 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:52.445452122 +0000 UTC m=+142.840970890 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.949662 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7fts\" (UniqueName: \"kubernetes.io/projected/119b17a5-9014-4cea-b2cc-32e410c88465-kube-api-access-x7fts\") pod \"packageserver-d55dfcdfc-5db89\" (UID: \"119b17a5-9014-4cea-b2cc-32e410c88465\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5db89" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.961779 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-7vxjx" Feb 19 12:48:51 crc kubenswrapper[4833]: I0219 12:48:51.969318 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-xjng4" Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.030786 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9hhkc" Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.043271 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdwvj"] Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.043444 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4gj46" Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.056091 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.056442 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8cbn5" Feb 19 12:48:52 crc kubenswrapper[4833]: E0219 12:48:52.056824 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:52.556811221 +0000 UTC m=+142.952329989 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.060845 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zw6vx" Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.075975 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-k9bp4" Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.092827 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzk7x" Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.109933 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-flg7d"] Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.157779 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:48:52 crc kubenswrapper[4833]: E0219 12:48:52.158118 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:52.658102306 +0000 UTC m=+143.053621074 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.171999 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5db89" Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.205701 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525085-npsvp" Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.229874 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-shgxh"] Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.231432 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-wdshf"] Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.261396 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:52 crc kubenswrapper[4833]: E0219 12:48:52.261847 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:52.761834783 +0000 UTC m=+143.157353551 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.271383 4833 generic.go:334] "Generic (PLEG): container finished" podID="6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6" containerID="a17aac24dc497fba5d4ea7b30b805577af3c9515cb1d3df6bb0994040d5701f0" exitCode=0 Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.272093 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b" event={"ID":"6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6","Type":"ContainerDied","Data":"a17aac24dc497fba5d4ea7b30b805577af3c9515cb1d3df6bb0994040d5701f0"} Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.272124 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b" event={"ID":"6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6","Type":"ContainerStarted","Data":"a1824ab98277e1b1443896979d817f0fc74db1877d818dd0910dace4ec30408c"} Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.292148 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2kph" event={"ID":"78fc8d4f-ecae-4d57-b0a6-4a31751eb3c0","Type":"ContainerStarted","Data":"15a6cad99ce9449fb295e8aad95805ccfe82b2a73310ed3d96c71cf0d9bf3a15"} Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.292185 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2kph" event={"ID":"78fc8d4f-ecae-4d57-b0a6-4a31751eb3c0","Type":"ContainerStarted","Data":"d003fab7cbda8e0f9cd24084e1ad0724557bc41e57afafd93966ddc726a2949a"} Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.300962 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-k855k"] Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.305427 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5w7fq" event={"ID":"a50be962-42d3-4e0a-bf3d-13e00fd679a2","Type":"ContainerStarted","Data":"51b2436ac2c764d6ea47948849ef4e1192079a09d9d7133b14c0e6d9a31dee3c"} Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.305464 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5w7fq" event={"ID":"a50be962-42d3-4e0a-bf3d-13e00fd679a2","Type":"ContainerStarted","Data":"edf3ea07d9c681481f8743e279d434706f00e43ced8468cdede5edab09fecc3a"} Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.361916 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:48:52 crc kubenswrapper[4833]: E0219 12:48:52.362276 4833 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:52.862260907 +0000 UTC m=+143.257779685 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.371919 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzmp9" Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.371947 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzmp9" event={"ID":"17e0b921-253c-44e3-8abd-616a4c22825c","Type":"ContainerStarted","Data":"414c8b353f848dad344c3622b546d19fcb8a6cbde0a833452ab8371321ca24f3"} Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.371961 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzmp9" event={"ID":"17e0b921-253c-44e3-8abd-616a4c22825c","Type":"ContainerStarted","Data":"56831085d07e26d8902e9d8dcafe610b45d8836b5309c21566a26ef29101a9f3"} Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.371972 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-hwh5m" Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.371988 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-lmrs2" event={"ID":"8a863328-15b8-46bc-9ffd-faa97add46ea","Type":"ContainerStarted","Data":"2989ef99f97842084f75093190999c93aaafe8bbcd1dcd02ca9c561907fa108d"} Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.372002 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-66dsh" Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.372012 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-lmrs2" event={"ID":"8a863328-15b8-46bc-9ffd-faa97add46ea","Type":"ContainerStarted","Data":"6da3e3a47887ac551261c5405cedb128c8abb3bc7b3e2720a6105745ef9f1220"} Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.372021 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntq2z" event={"ID":"e58cf141-d5f2-4832-ab5c-067a3674cbb8","Type":"ContainerStarted","Data":"950b75297727acb82076973da699e4a83b6b05127779ae5d03764037fa569342"} Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.372033 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntq2z" event={"ID":"e58cf141-d5f2-4832-ab5c-067a3674cbb8","Type":"ContainerStarted","Data":"cc3108639aa576b6720ced6287f8e6112510c4e3852360931e0b7f7cc6df63cb"} Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.372042 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-f9d7485db-zjv88" event={"ID":"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63","Type":"ContainerStarted","Data":"f3703f41ba8ea714c509bc765ff5ebde26279bad586041f16c0abda6e48d1678"} Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.372051 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hwh5m" event={"ID":"7792e427-5573-4b37-858e-3c40b4f37505","Type":"ContainerStarted","Data":"d217a7e19dabaae123553ea9902ed8eebd08fb60123974145760e638efc85dc6"} Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.372062 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hwh5m" event={"ID":"7792e427-5573-4b37-858e-3c40b4f37505","Type":"ContainerStarted","Data":"a312fec72ef684dd37f4103bf2e56d930258b21edb5fe657a1c1de72e42561d1"} Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.372093 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lhlvm" event={"ID":"d665cab8-36f2-4952-b4ca-75f832485488","Type":"ContainerStarted","Data":"2d84242add753c50b6564b1b70ceb9b7b07ce996b042a318015ffaa639d16350"} Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.372120 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lhlvm" event={"ID":"d665cab8-36f2-4952-b4ca-75f832485488","Type":"ContainerStarted","Data":"dcfcbaba2fb454979a91c317e09b4a57f9e5adbf897e8e9696015d5bb1199cb1"} Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.372132 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cm4vr" event={"ID":"85474a25-567e-4ef0-be8a-75de8c7d18d9","Type":"ContainerStarted","Data":"750a99609df20ae10225262ae434a4b7e5d30109b002244b29b14374a1142074"} Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.372142 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-flg7d" event={"ID":"408486ff-077c-4f21-9c9c-e853669e312f","Type":"ContainerStarted","Data":"dadfbdab00fff2fd5a01498e0d646f1cd94ca062115fb40dad617de1f40e1c80"} Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.372152 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nwx99" event={"ID":"b06f1db6-3e1a-4db7-ad72-588f9900223a","Type":"ContainerStarted","Data":"d3db4286055dd710427902f07376c669b63e5a3a5079f68cc37d8c18b9ad1a1d"} Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.372161 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-m5wfx" event={"ID":"5a883fd6-0a70-4705-951f-df5e7b1bb863","Type":"ContainerStarted","Data":"02da89fa6b52a9d03bb42a8b30ed1bd8636a084f57ba76f71233dcb750cfe652"} Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.372172 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-m5wfx" event={"ID":"5a883fd6-0a70-4705-951f-df5e7b1bb863","Type":"ContainerStarted","Data":"9eb421384205733b8488131d121211302a8785e4cbc623a50ccdcdaeed9deba9"} Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.372180 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-66dsh" 
event={"ID":"6bcce72d-6a5d-42d2-b7ed-c721057061f6","Type":"ContainerStarted","Data":"f18446f806dd787e55fd9d72727f893f9ea695a0439a02a3bfdd9dd898f3af2b"} Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.372189 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-66dsh" event={"ID":"6bcce72d-6a5d-42d2-b7ed-c721057061f6","Type":"ContainerStarted","Data":"f95f42bcb62df035a2217d86beb9f3e33e0952a1fdfd8a7adefebdb628a7593e"} Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.395363 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qcdcz"] Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.420631 4833 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwh5m container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.420697 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hwh5m" podUID="7792e427-5573-4b37-858e-3c40b4f37505" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.421376 4833 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-vzmp9 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.421411 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzmp9" podUID="17e0b921-253c-44e3-8abd-616a4c22825c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.463255 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:52 crc kubenswrapper[4833]: E0219 12:48:52.465401 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:52.965381848 +0000 UTC m=+143.360900726 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.490450 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-wqw4k"] Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.501463 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-z4dsv"] Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.518341 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mcmtw"] Feb 19 12:48:52 crc kubenswrapper[4833]: E0219 12:48:52.572755 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:53.072722805 +0000 UTC m=+143.468241573 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.572472 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.574605 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:52 crc kubenswrapper[4833]: E0219 12:48:52.575209 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:53.075194328 +0000 UTC m=+143.470713096 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.677061 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 12:48:52 crc kubenswrapper[4833]: E0219 12:48:52.677211 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:53.177189811 +0000 UTC m=+143.572708579 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.677617 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n"
Feb 19 12:48:52 crc kubenswrapper[4833]: E0219 12:48:52.677852 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:53.177839757 +0000 UTC m=+143.573358525 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 12:48:52 crc kubenswrapper[4833]: W0219 12:48:52.690221 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59ef1c2f_bfb4_45af_9ae4_8d0455a5691d.slice/crio-d0fa882ddc4e69f683700f5b29202e29e84db083242ec85ba81f700da3d90ad9 WatchSource:0}: Error finding container d0fa882ddc4e69f683700f5b29202e29e84db083242ec85ba81f700da3d90ad9: Status 404 returned error can't find the container with id d0fa882ddc4e69f683700f5b29202e29e84db083242ec85ba81f700da3d90ad9
Feb 19 12:48:52 crc kubenswrapper[4833]: W0219 12:48:52.699986 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38ba40b9_14c2_419d_a54f_9dbc0f1ada2f.slice/crio-9b56472298ae1f4e22f63f28e500e80db76c15f172fd63002649a551b342cfef WatchSource:0}: Error finding container 9b56472298ae1f4e22f63f28e500e80db76c15f172fd63002649a551b342cfef: Status 404 returned error can't find the container with id 9b56472298ae1f4e22f63f28e500e80db76c15f172fd63002649a551b342cfef
Feb 19 12:48:52 crc kubenswrapper[4833]: W0219 12:48:52.725601 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98f12eb2_abac_4859_870d_72555b13cda8.slice/crio-7817faeff3cae9c5b3acec04eb6cda913df23b423b2adfede10aeece1aa64074 WatchSource:0}: Error finding container 7817faeff3cae9c5b3acec04eb6cda913df23b423b2adfede10aeece1aa64074: Status 404 returned error can't find the container with id 7817faeff3cae9c5b3acec04eb6cda913df23b423b2adfede10aeece1aa64074
Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.778774 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 12:48:52 crc kubenswrapper[4833]: E0219 12:48:52.779515 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:53.279485811 +0000 UTC m=+143.675004579 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.838937 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-66dsh"
Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.839974 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-9r2j9"]
Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.880359 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n"
Feb 19 12:48:52 crc kubenswrapper[4833]: E0219 12:48:52.881087 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:53.381076964 +0000 UTC m=+143.776595732 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.981996 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 12:48:52 crc kubenswrapper[4833]: E0219 12:48:52.982105 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:53.482090163 +0000 UTC m=+143.877608931 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 12:48:52 crc kubenswrapper[4833]: I0219 12:48:52.982293 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n"
Feb 19 12:48:52 crc kubenswrapper[4833]: E0219 12:48:52.982574 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:53.482566875 +0000 UTC m=+143.878085643 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.084303 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 12:48:53 crc kubenswrapper[4833]: E0219 12:48:53.084663 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:53.58464614 +0000 UTC m=+143.980164908 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.186227 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n"
Feb 19 12:48:53 crc kubenswrapper[4833]: E0219 12:48:53.186511 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:53.686485179 +0000 UTC m=+144.082003947 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.287115 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 12:48:53 crc kubenswrapper[4833]: E0219 12:48:53.287835 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:53.787820735 +0000 UTC m=+144.183339503 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.377187 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-shgxh" event={"ID":"87c5cf64-6aee-4660-9847-5161a05a0410","Type":"ContainerStarted","Data":"11d565309089968cd693d71b0bd9645c71a6c5d7097a532d1284c2c2b627dfc6"}
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.383397 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mcmtw" event={"ID":"98f12eb2-abac-4859-870d-72555b13cda8","Type":"ContainerStarted","Data":"7817faeff3cae9c5b3acec04eb6cda913df23b423b2adfede10aeece1aa64074"}
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.383874 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-hwh5m" podStartSLOduration=122.383854549 podStartE2EDuration="2m2.383854549s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:53.367829116 +0000 UTC m=+143.763347884" watchObservedRunningTime="2026-02-19 12:48:53.383854549 +0000 UTC m=+143.779373317"
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.418035 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-86hv6"]
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.420917 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n"
Feb 19 12:48:53 crc kubenswrapper[4833]: E0219 12:48:53.421304 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:53.921289369 +0000 UTC m=+144.316808137 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.423674 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9r2j9" event={"ID":"61a2a996-37aa-420f-b45e-9776c269d9dd","Type":"ContainerStarted","Data":"0dd4ee4bdc2a4a89181a5a7955e4a7f6b2e4dc9b0c0dbdf5c113c2fa34e5d721"}
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.437422 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdwvj" event={"ID":"27e9c527-e726-412a-ac42-d8b8974f136f","Type":"ContainerStarted","Data":"ad61f8545ac2c7ab58a3cb220b82faefbefe7e2fdac4f177312f0f8128ac68b0"}
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.437463 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdwvj" event={"ID":"27e9c527-e726-412a-ac42-d8b8974f136f","Type":"ContainerStarted","Data":"61e6e86b44b0c34008834cf502732c61d181ec0151e4bde6609ba274cda8279f"}
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.439138 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7vxjx"]
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.441426 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntq2z" podStartSLOduration=122.441415905 podStartE2EDuration="2m2.441415905s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:53.439648121 +0000 UTC m=+143.835166889" watchObservedRunningTime="2026-02-19 12:48:53.441415905 +0000 UTC m=+143.836934673"
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.449626 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bpx5m"]
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.455413 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-xjng4" event={"ID":"a3b401b0-2191-450a-9d87-e8066678f93b","Type":"ContainerStarted","Data":"58efffdfb42744fa0a88e6bca599636b8159017bd46628ddba9c38980077fc31"}
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.455668 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-xjng4" event={"ID":"a3b401b0-2191-450a-9d87-e8066678f93b","Type":"ContainerStarted","Data":"fd3f6f4b3ed09bfdad96dc5fd96c5e24e3c01c03d06d315cffb161fbd0a89965"}
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.456054 4833 csr.go:261] certificate signing request csr-9tzzb is approved, waiting to be issued
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.460424 4833 csr.go:257] certificate signing request csr-9tzzb is issued
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.466157 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-z4dsv" event={"ID":"59ef1c2f-bfb4-45af-9ae4-8d0455a5691d","Type":"ContainerStarted","Data":"d0fa882ddc4e69f683700f5b29202e29e84db083242ec85ba81f700da3d90ad9"}
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.477411 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4bk7q"]
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.478378 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzmp9" podStartSLOduration=121.478362524 podStartE2EDuration="2m1.478362524s" podCreationTimestamp="2026-02-19 12:46:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:53.474071856 +0000 UTC m=+143.869590624" watchObservedRunningTime="2026-02-19 12:48:53.478362524 +0000 UTC m=+143.873881282"
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.487675 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-k9bp4" event={"ID":"88c5b880-561a-4961-91be-bc1ee9bdd96b","Type":"ContainerStarted","Data":"bdc6e879bb8ec6c2d9ad61c16da56d6ff568fc9fe8f89b514a8c9a6494825ef6"}
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.487720 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-k9bp4" event={"ID":"88c5b880-561a-4961-91be-bc1ee9bdd96b","Type":"ContainerStarted","Data":"6de228349cc6b9264c93b5f06fbe426878b9af8b3670817172928ac52e080a27"}
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.493954 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qcdcz" event={"ID":"14cacc6a-664d-4560-875d-55e4c731671a","Type":"ContainerStarted","Data":"41f37d30710c07faf3cec016d39579aa4427a93c94c13187aaa6a9058b1b4b0a"}
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.495435 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-wqw4k" event={"ID":"38ba40b9-14c2-419d-a54f-9dbc0f1ada2f","Type":"ContainerStarted","Data":"9b56472298ae1f4e22f63f28e500e80db76c15f172fd63002649a551b342cfef"}
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.499717 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-wdshf" event={"ID":"8b08cc06-0056-43c0-a73f-9070d99cc0b5","Type":"ContainerStarted","Data":"ca1e78f5603cbca3da9ccd79e4518796d57934f662a3468b1ff92c9276ab20d3"}
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.499739 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-wdshf" event={"ID":"8b08cc06-0056-43c0-a73f-9070d99cc0b5","Type":"ContainerStarted","Data":"8f505bac9386432ad44b1147310656a0c9fce1d1c91b9339996f08e9a76d3a8b"}
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.501056 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rzdxr"]
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.502534 4833 generic.go:334] "Generic (PLEG): container finished" podID="85474a25-567e-4ef0-be8a-75de8c7d18d9" containerID="b5f89fdb19a2bdbf843ab3c482522ab80aa67d35af078dabd3b090f2ce7f332b" exitCode=0
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.505202 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cm4vr" event={"ID":"85474a25-567e-4ef0-be8a-75de8c7d18d9","Type":"ContainerDied","Data":"b5f89fdb19a2bdbf843ab3c482522ab80aa67d35af078dabd3b090f2ce7f332b"}
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.516307 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nwx99" event={"ID":"b06f1db6-3e1a-4db7-ad72-588f9900223a","Type":"ContainerStarted","Data":"f61ac733d5d235a6d4a745a3ddbdc250bbe51021e86bcf0352a1050ac5d071c8"}
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.527481 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 12:48:53 crc kubenswrapper[4833]: E0219 12:48:53.527577 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:54.02756014 +0000 UTC m=+144.423078898 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.528281 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n"
Feb 19 12:48:53 crc kubenswrapper[4833]: E0219 12:48:53.531813 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:54.031800967 +0000 UTC m=+144.427319735 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.538969 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k855k" event={"ID":"749453c5-3458-41d2-b1ab-aca8e018cfd5","Type":"ContainerStarted","Data":"e6913e386b0dd3a32b64f83a410144b999a7cc1ec6ec0a8898f1afb406e57760"}
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.539014 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k855k" event={"ID":"749453c5-3458-41d2-b1ab-aca8e018cfd5","Type":"ContainerStarted","Data":"6aada2c823ca7287ce0161027dba4ed796403da335ee72b40d412bc7747bdfaa"}
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.542759 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zjv88" event={"ID":"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63","Type":"ContainerStarted","Data":"c2a00d340279bef1b14315339bf392b961d165f465909ddcf721f6c95a5b76a4"}
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.544432 4833 generic.go:334] "Generic (PLEG): container finished" podID="78fc8d4f-ecae-4d57-b0a6-4a31751eb3c0" containerID="15a6cad99ce9449fb295e8aad95805ccfe82b2a73310ed3d96c71cf0d9bf3a15" exitCode=0
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.544540 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2kph" event={"ID":"78fc8d4f-ecae-4d57-b0a6-4a31751eb3c0","Type":"ContainerDied","Data":"15a6cad99ce9449fb295e8aad95805ccfe82b2a73310ed3d96c71cf0d9bf3a15"}
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.562004 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-flg7d" event={"ID":"408486ff-077c-4f21-9c9c-e853669e312f","Type":"ContainerStarted","Data":"1aa43cb1c739e32039ae6420b4ac930c439a256393bbc87b67b0cda0c910bd0f"}
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.562931 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-m5wfx" podStartSLOduration=122.562917658 podStartE2EDuration="2m2.562917658s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:53.536738501 +0000 UTC m=+143.932257269" watchObservedRunningTime="2026-02-19 12:48:53.562917658 +0000 UTC m=+143.958436426"
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.565510 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4xshp"]
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.566486 4833 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwh5m container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.566531 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hwh5m" podUID="7792e427-5573-4b37-858e-3c40b4f37505" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.585934 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lhlvm" podStartSLOduration=122.585916596 podStartE2EDuration="2m2.585916596s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:53.577527846 +0000 UTC m=+143.973046614" watchObservedRunningTime="2026-02-19 12:48:53.585916596 +0000 UTC m=+143.981435364"
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.586869 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzmp9"
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.603259 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-lmrs2" podStartSLOduration=122.603241722 podStartE2EDuration="2m2.603241722s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:53.600466152 +0000 UTC m=+143.995984920" watchObservedRunningTime="2026-02-19 12:48:53.603241722 +0000 UTC m=+143.998760490"
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.605471 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-nlrpv"]
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.607222 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9hhkc"]
Feb 19 12:48:53 crc kubenswrapper[4833]: W0219 12:48:53.607806 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0f8a770_7d1a_430b_8a25_aa325b17c767.slice/crio-d480f880ee1ca1c4e8e96720491be912fd477fac78e8f280b415664eee52fed1 WatchSource:0}: Error finding container d480f880ee1ca1c4e8e96720491be912fd477fac78e8f280b415664eee52fed1: Status 404 returned error can't find the container with id d480f880ee1ca1c4e8e96720491be912fd477fac78e8f280b415664eee52fed1
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.616747 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525085-npsvp"]
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.636760 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-fr925"]
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.637141 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhmhd"]
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.641792 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 12:48:53 crc kubenswrapper[4833]: E0219 12:48:53.642409 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:54.142383095 +0000 UTC m=+144.537901863 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.644242 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n"
Feb 19 12:48:53 crc kubenswrapper[4833]: E0219 12:48:53.647185 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:54.147173916 +0000 UTC m=+144.542692674 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.654410 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5w7fq" podStartSLOduration=122.654393537 podStartE2EDuration="2m2.654393537s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:53.653480624 +0000 UTC m=+144.048999392" watchObservedRunningTime="2026-02-19 12:48:53.654393537 +0000 UTC m=+144.049912305"
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.685086 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-66dsh" podStartSLOduration=122.685068738 podStartE2EDuration="2m2.685068738s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:53.682843132 +0000 UTC m=+144.078361900" watchObservedRunningTime="2026-02-19 12:48:53.685068738 +0000 UTC m=+144.080587496"
Feb 19 12:48:53 crc kubenswrapper[4833]: W0219 12:48:53.699680 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20fbe5b6_4f07_41fd_a5f8_05c9d2c71089.slice/crio-9a59b37dc0ca35d765f7e06b3c6a4c1da6d3af519c744aa1b909314725d24c4a WatchSource:0}: Error finding container 9a59b37dc0ca35d765f7e06b3c6a4c1da6d3af519c744aa1b909314725d24c4a: Status 404 returned error can't find the container with id 9a59b37dc0ca35d765f7e06b3c6a4c1da6d3af519c744aa1b909314725d24c4a
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.702245 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4gj46"]
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.705211 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zw6vx"]
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.756532 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 12:48:53 crc kubenswrapper[4833]: E0219 12:48:53.756661 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:54.256631205 +0000 UTC m=+144.652149973 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.756780 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n"
Feb 19 12:48:53 crc kubenswrapper[4833]: E0219 12:48:53.758398 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:54.258378819 +0000 UTC m=+144.653897577 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.787195 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5db89"]
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.820418 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-zjv88" podStartSLOduration=122.820387818 podStartE2EDuration="2m2.820387818s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:53.790864416 +0000 UTC m=+144.186383184" watchObservedRunningTime="2026-02-19 12:48:53.820387818 +0000 UTC m=+144.215906586"
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.826646 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzk7x"]
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.829185 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8cbn5"]
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.872378 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 12:48:53 crc kubenswrapper[4833]: E0219 12:48:53.872780 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:54.372747123 +0000 UTC m=+144.768265901 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 12:48:53 crc kubenswrapper[4833]: W0219 12:48:53.900826 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1bfd0e4_a923_4520_bd3c_df5aa3e2dfcc.slice/crio-754e860413f708461856ebeac2ffeb82b3e8ae6e331456df07640e111f5f6bd4 WatchSource:0}: Error finding container 754e860413f708461856ebeac2ffeb82b3e8ae6e331456df07640e111f5f6bd4: Status 404 returned error can't find the container with id 754e860413f708461856ebeac2ffeb82b3e8ae6e331456df07640e111f5f6bd4
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.927644 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k855k" podStartSLOduration=122.927620832 podStartE2EDuration="2m2.927620832s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:53.890997722 +0000 UTC m=+144.286516490" watchObservedRunningTime="2026-02-19 12:48:53.927620832 +0000 UTC m=+144.323139610"
Feb 19 12:48:53 crc kubenswrapper[4833]: W0219 12:48:53.967208 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod119b17a5_9014_4cea_b2cc_32e410c88465.slice/crio-e8a797c7de06232081404c22ec13c2b106f2ab0ceab61b44767f9abbc9da8c56 WatchSource:0}: Error finding container e8a797c7de06232081404c22ec13c2b106f2ab0ceab61b44767f9abbc9da8c56: Status 404 returned error can't find the container with id e8a797c7de06232081404c22ec13c2b106f2ab0ceab61b44767f9abbc9da8c56
Feb 19 12:48:53 crc kubenswrapper[4833]: I0219 12:48:53.973584 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n"
Feb 19 12:48:53 crc kubenswrapper[4833]: E0219 12:48:53.973933 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:54.473912216 +0000 UTC m=+144.869431014 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.074961 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 12:48:54 crc kubenswrapper[4833]: E0219 12:48:54.075682 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:54.575659832 +0000 UTC m=+144.971178600 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.079865 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-k9bp4"
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.080870 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-wdshf" podStartSLOduration=6.080848863 podStartE2EDuration="6.080848863s" podCreationTimestamp="2026-02-19 12:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:54.080146635 +0000 UTC m=+144.475665403" watchObservedRunningTime="2026-02-19 12:48:54.080848863 +0000 UTC m=+144.476367631"
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.092185 4833 patch_prober.go:28] interesting pod/router-default-5444994796-k9bp4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 19 12:48:54 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld
Feb 19 12:48:54 crc kubenswrapper[4833]: [+]process-running ok
Feb 19 12:48:54 crc kubenswrapper[4833]: healthz check failed
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.092245 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k9bp4" podUID="88c5b880-561a-4961-91be-bc1ee9bdd96b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.179325 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n"
Feb 19 12:48:54 crc kubenswrapper[4833]: E0219 12:48:54.179750 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:54.679733158 +0000 UTC m=+145.075251926 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.256773 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-k9bp4" podStartSLOduration=123.256759283 podStartE2EDuration="2m3.256759283s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:54.255004479 +0000 UTC m=+144.650523257" watchObservedRunningTime="2026-02-19 12:48:54.256759283 +0000 UTC m=+144.652278041"
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.283437 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 12:48:54 crc kubenswrapper[4833]: E0219 12:48:54.283664 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:54.783635969 +0000 UTC m=+145.179154737 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.284599 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n"
Feb 19 12:48:54 crc kubenswrapper[4833]: E0219 12:48:54.285146 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:54.785133676 +0000 UTC m=+145.180652444 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.386291 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 12:48:54 crc kubenswrapper[4833]: E0219 12:48:54.386687 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:54.886658138 +0000 UTC m=+145.282176906 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.387014 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n"
Feb 19 12:48:54 crc kubenswrapper[4833]: E0219 12:48:54.387318 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:54.887307804 +0000 UTC m=+145.282826572 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.463106 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-19 12:43:53 +0000 UTC, rotation deadline is 2026-11-15 10:10:10.007619188 +0000 UTC
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.463154 4833 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6453h21m15.544468788s for next certificate rotation
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.491108 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 12:48:54 crc kubenswrapper[4833]: E0219 12:48:54.491828 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:54.99180859 +0000 UTC m=+145.387327358 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.529283 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-flg7d" podStartSLOduration=123.529260501 podStartE2EDuration="2m3.529260501s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:54.523484856 +0000 UTC m=+144.919003624" watchObservedRunningTime="2026-02-19 12:48:54.529260501 +0000 UTC m=+144.924779269"
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.580821 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-xjng4" podStartSLOduration=6.580802306 podStartE2EDuration="6.580802306s" podCreationTimestamp="2026-02-19 12:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:54.5761685 +0000 UTC m=+144.971687258" watchObservedRunningTime="2026-02-19 12:48:54.580802306 +0000 UTC m=+144.976321074"
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.592426 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n"
Feb 19 12:48:54 crc kubenswrapper[4833]: E0219 12:48:54.592839 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:55.092824908 +0000 UTC m=+145.488343676 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.592976 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdwvj" event={"ID":"27e9c527-e726-412a-ac42-d8b8974f136f","Type":"ContainerStarted","Data":"31d24d1646f00dbd14e4e236061ac3d6781a3bc079fda0195ae21b6791602fca"}
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.619086 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-k855k" event={"ID":"749453c5-3458-41d2-b1ab-aca8e018cfd5","Type":"ContainerStarted","Data":"83e8c4174f1f1f94e4724bf5e24653a899a18dc837a8e312c6e8a1f134444fe2"}
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.651175 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9hhkc" event={"ID":"45708559-b521-4b8d-a745-12119e61a8cb","Type":"ContainerStarted","Data":"9419e0ff9d37ca16a44b41ee340e1efe76ff71bf04ff01f37318098baf953111"}
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.664754 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9r2j9" event={"ID":"61a2a996-37aa-420f-b45e-9776c269d9dd","Type":"ContainerStarted","Data":"6b7a28b2f56b47b720e710bddab942394f837b7c40c91ac2bff572222c73eaed"}
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.693863 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 19 12:48:54 crc kubenswrapper[4833]: E0219 12:48:54.695342 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:55.195319834 +0000 UTC m=+145.590838622 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.718525 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-flg7d" event={"ID":"408486ff-077c-4f21-9c9c-e853669e312f","Type":"ContainerStarted","Data":"12cce2cbc6e0ecdeb1ea20b360d3d58b4a55cf9b7b6a65e3c7aab2f24b4bf00f"}
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.720154 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-4bk7q" event={"ID":"e0f8a770-7d1a-430b-8a25-aa325b17c767","Type":"ContainerStarted","Data":"d480f880ee1ca1c4e8e96720491be912fd477fac78e8f280b415664eee52fed1"}
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.721407 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4xshp" event={"ID":"846a8d85-beb7-4c48-9705-59ed68378f4c","Type":"ContainerStarted","Data":"d37f5537e3b78bb0cd976e9c85a2069001171a2ea67f6d055e946cf03fa810b6"}
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.722870 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cm4vr" event={"ID":"85474a25-567e-4ef0-be8a-75de8c7d18d9","Type":"ContainerStarted","Data":"b12aee796dd4f805194ca04ad8342936cfaab0420897df4f0866ad4a155e09a9"}
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.726895 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rzdxr" event={"ID":"ee0e0b2d-bf8a-4d70-b85f-b21bd59baaeb","Type":"ContainerStarted","Data":"6f146b243b5383f5299addbbc3a41453952c2e5ec6616fe01cbffed7cb117a93"}
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.726914 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rzdxr" event={"ID":"ee0e0b2d-bf8a-4d70-b85f-b21bd59baaeb","Type":"ContainerStarted","Data":"ba8ab71ee9332fe0c82f19e9fcc85dbf1f47d638ff171d03d69c453dbdcd5c10"}
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.738891 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nlrpv" event={"ID":"20fbe5b6-4f07-41fd-a5f8-05c9d2c71089","Type":"ContainerStarted","Data":"c5de43a161b14b6106efddea5829fad0355a9b5c561e399bf70dd8d43024ea9f"}
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.738925 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nlrpv" event={"ID":"20fbe5b6-4f07-41fd-a5f8-05c9d2c71089","Type":"ContainerStarted","Data":"9a59b37dc0ca35d765f7e06b3c6a4c1da6d3af519c744aa1b909314725d24c4a"}
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.740218 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zw6vx" event={"ID":"7d7f1229-1f55-416b-beeb-60a3ae0abc62","Type":"ContainerStarted","Data":"01ba816664db5c627e659a4e9a605787487e909ae517f911f3528d4ded6f5b7c"}
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.740920 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zw6vx"
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.742032 4833 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zw6vx container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.742063 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zw6vx" podUID="7d7f1229-1f55-416b-beeb-60a3ae0abc62" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused"
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.744258 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b" event={"ID":"6f3dd9d5-f3cb-47d9-9c43-d530fe20a2b6","Type":"ContainerStarted","Data":"f768e48faad407784d689d30ea5999351fb7aa977495082f2ddde560ed23c86a"}
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.745395 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8cbn5" event={"ID":"d118e366-ab6c-41e1-9aae-c993e9125fd4","Type":"ContainerStarted","Data":"6a38ccb2c61b5538b0f2a0988a2b66780dea55141f50282cd291fc7add512510"}
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.748288 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mcmtw" event={"ID":"98f12eb2-abac-4859-870d-72555b13cda8","Type":"ContainerStarted","Data":"42e45c136e560dd65755fe674b0352e39127e26777aecc9a38ff2edaaf93cd3e"}
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.749786 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzk7x" event={"ID":"b1bfd0e4-a923-4520-bd3c-df5aa3e2dfcc","Type":"ContainerStarted","Data":"754e860413f708461856ebeac2ffeb82b3e8ae6e331456df07640e111f5f6bd4"}
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.757435 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qcdcz" event={"ID":"14cacc6a-664d-4560-875d-55e4c731671a","Type":"ContainerStarted","Data":"f61b7cdeaa8820117cbb39679ee3846244bbe3b093aa694975d76d7d9ce33eb6"}
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.763148 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhmhd" event={"ID":"6cee23cd-f0ff-4954-9497-69b2097a34f1","Type":"ContainerStarted","Data":"b354c60645babae9e83b3bd1d6e92d2d2cbf8f79307469b3703a3c37ff458788"}
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.774954 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-shgxh" event={"ID":"87c5cf64-6aee-4660-9847-5161a05a0410","Type":"ContainerStarted","Data":"bfd6714ac1d41d965e26b4f1071cdb00afa47c90affd07a4963d9c709527e92d"}
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.787197 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-wqw4k" event={"ID":"38ba40b9-14c2-419d-a54f-9dbc0f1ada2f","Type":"ContainerStarted","Data":"b6f409b59d235cccb0630e536415f2bf5511f85dff8b8d1f7670e252c3cd33ef"}
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.789154 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7vxjx" event={"ID":"dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5","Type":"ContainerStarted","Data":"169dd62a211993bc596ce9e32463c7f3be705bc538d81349e88c46e4757b41d5"}
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.790299 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fr925" event={"ID":"360a44b7-5f42-42d7-918c-226761dbbd2c","Type":"ContainerStarted","Data":"c16d26f7d2388b379b9b195b0807b0c87df7a42fe98f96d2830946abc90465e8"}
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.794948 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n"
Feb 19 12:48:54 crc kubenswrapper[4833]: E0219 12:48:54.796583 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:55.296563848 +0000 UTC m=+145.692082736 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.816463 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bpx5m" event={"ID":"c314e60b-4099-42e8-9eff-e3ef54025cc3","Type":"ContainerStarted","Data":"46df7b3ec9589f42800eac63f30a5264755fd9f80ee0c231701d2f8c096c29d7"}
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.816528 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bpx5m" event={"ID":"c314e60b-4099-42e8-9eff-e3ef54025cc3","Type":"ContainerStarted","Data":"1d10201732b767330506c5236f0f050db14da7de2ba7fd17f048b43133a465ef"}
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.826617 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2kph" event={"ID":"78fc8d4f-ecae-4d57-b0a6-4a31751eb3c0","Type":"ContainerStarted","Data":"f47c90b8e205badc03185e40b11d8402db7cf1918b9c1780237108850055bf2f"}
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.826932 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2kph"
Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.833088 4833 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525085-npsvp" event={"ID":"f436432a-f92b-4b2a-89a9-8014f487dc12","Type":"ContainerStarted","Data":"b07ed14e35ba092e38f1f9d40089d8df4020b768a3e9fe5355fb6e57f7423d66"} Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.833152 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525085-npsvp" event={"ID":"f436432a-f92b-4b2a-89a9-8014f487dc12","Type":"ContainerStarted","Data":"25fcd9a755e8483605aac5d466a586546e35a9e18ad756af0296055e187486bf"} Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.844094 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nwx99" podStartSLOduration=123.844080052 podStartE2EDuration="2m3.844080052s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:54.843283032 +0000 UTC m=+145.238801790" watchObservedRunningTime="2026-02-19 12:48:54.844080052 +0000 UTC m=+145.239598820" Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.845742 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4gj46" event={"ID":"ea6cc7f7-b2fa-40d4-93cd-795a01861ecb","Type":"ContainerStarted","Data":"436be4b4bf3bdd38ef1f9d7af2d563d98032a70a5704ce08beabe3311afdab57"} Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.877617 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-86hv6" event={"ID":"ab5bee17-16ff-4e1b-9868-69443e2b10d4","Type":"ContainerStarted","Data":"170e86a07a8165129009969ef812260586198f5ab060ee0ca551b111b8416da5"} Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.877688 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-86hv6" event={"ID":"ab5bee17-16ff-4e1b-9868-69443e2b10d4","Type":"ContainerStarted","Data":"f58c7062c6e36ec052c1aecc86cb591fe8b5b6c8c9ae92f8a9d66cb19450171f"} Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.896028 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.896170 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29525085-npsvp" podStartSLOduration=123.896160621 podStartE2EDuration="2m3.896160621s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:54.895164496 +0000 UTC m=+145.290683264" watchObservedRunningTime="2026-02-19 12:48:54.896160621 +0000 UTC m=+145.291679389" Feb 19 12:48:54 crc kubenswrapper[4833]: E0219 12:48:54.896689 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 12:48:55.396666874 +0000 UTC m=+145.792185642 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:54 crc kubenswrapper[4833]: I0219 12:48:54.912540 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-shgxh" podStartSLOduration=123.912521922 podStartE2EDuration="2m3.912521922s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:54.911390014 +0000 UTC m=+145.306908782" watchObservedRunningTime="2026-02-19 12:48:54.912521922 +0000 UTC m=+145.308040690" Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:54.937568 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-z4dsv" event={"ID":"59ef1c2f-bfb4-45af-9ae4-8d0455a5691d","Type":"ContainerStarted","Data":"c662a10b0c17a49af7d7953c6f8baa47d9d5841b13d148a49d799c917826ea81"} Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:54.938170 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-z4dsv" Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:54.940561 4833 patch_prober.go:28] interesting pod/console-operator-58897d9998-z4dsv container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:54.940610 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-z4dsv" podUID="59ef1c2f-bfb4-45af-9ae4-8d0455a5691d" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:54.950025 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5db89" event={"ID":"119b17a5-9014-4cea-b2cc-32e410c88465","Type":"ContainerStarted","Data":"e8a797c7de06232081404c22ec13c2b106f2ab0ceab61b44767f9abbc9da8c56"} Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:54.954543 4833 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwh5m container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:54.954589 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hwh5m" podUID="7792e427-5573-4b37-858e-3c40b4f37505" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:54.972952 4833 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-zw6vx" podStartSLOduration=123.972936871 podStartE2EDuration="2m3.972936871s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:54.942364072 +0000 UTC m=+145.337882840" watchObservedRunningTime="2026-02-19 12:48:54.972936871 +0000 UTC m=+145.368455639" Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:54.996430 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qdwvj" podStartSLOduration=123.99641201 podStartE2EDuration="2m3.99641201s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:54.973887254 +0000 UTC m=+145.369406022" watchObservedRunningTime="2026-02-19 12:48:54.99641201 +0000 UTC m=+145.391930778" Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:54.998865 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:54.999180 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9r2j9" podStartSLOduration=122.99916652 podStartE2EDuration="2m2.99916652s" podCreationTimestamp="2026-02-19 12:46:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:54.998244816 +0000 UTC m=+145.393763574" watchObservedRunningTime="2026-02-19 12:48:54.99916652 +0000 UTC m=+145.394685288" Feb 19 12:48:55 crc kubenswrapper[4833]: E0219 12:48:54.999223 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:55.499210411 +0000 UTC m=+145.894729179 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.081573 4833 patch_prober.go:28] interesting pod/router-default-5444994796-k9bp4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 12:48:55 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Feb 19 12:48:55 crc kubenswrapper[4833]: [+]process-running ok Feb 19 12:48:55 crc kubenswrapper[4833]: healthz check failed Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.081618 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k9bp4" podUID="88c5b880-561a-4961-91be-bc1ee9bdd96b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.101001 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:48:55 crc kubenswrapper[4833]: E0219 12:48:55.101108 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:55.601086781 +0000 UTC m=+145.996605539 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.101828 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:55 crc kubenswrapper[4833]: E0219 12:48:55.103026 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:55.603008839 +0000 UTC m=+145.998527597 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.108245 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4gj46" podStartSLOduration=124.10823332 podStartE2EDuration="2m4.10823332s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:55.068997624 +0000 UTC m=+145.464516392" watchObservedRunningTime="2026-02-19 12:48:55.10823332 +0000 UTC m=+145.503752078" Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.113528 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-wqw4k" podStartSLOduration=124.113518563 podStartE2EDuration="2m4.113518563s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:55.102420904 +0000 UTC m=+145.497939672" watchObservedRunningTime="2026-02-19 12:48:55.113518563 +0000 UTC m=+145.509037331" Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.160060 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2kph" podStartSLOduration=124.160039132 podStartE2EDuration="2m4.160039132s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:55.159129849 +0000 UTC m=+145.554648617" watchObservedRunningTime="2026-02-19 12:48:55.160039132 +0000 UTC m=+145.555557900" Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.194139 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mcmtw" podStartSLOduration=124.194120679 podStartE2EDuration="2m4.194120679s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:55.193248737 +0000 UTC m=+145.588767505" watchObservedRunningTime="2026-02-19 12:48:55.194120679 +0000 UTC m=+145.589639437" Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.207705 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:48:55 crc kubenswrapper[4833]: E0219 12:48:55.207982 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-19 12:48:55.707961257 +0000 UTC m=+146.103480025 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.208339 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:55 crc kubenswrapper[4833]: E0219 12:48:55.208828 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:55.708715605 +0000 UTC m=+146.104234373 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.223087 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-4bk7q" podStartSLOduration=123.223072146 podStartE2EDuration="2m3.223072146s" podCreationTimestamp="2026-02-19 12:46:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:55.219058265 +0000 UTC m=+145.614577033" watchObservedRunningTime="2026-02-19 12:48:55.223072146 +0000 UTC m=+145.618590914" Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.250051 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rzdxr" podStartSLOduration=124.250032384 podStartE2EDuration="2m4.250032384s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:55.244038513 +0000 UTC m=+145.639557271" watchObservedRunningTime="2026-02-19 12:48:55.250032384 +0000 UTC m=+145.645551152" Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.294481 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b" podStartSLOduration=124.29446445 podStartE2EDuration="2m4.29446445s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:55.293712301 +0000 UTC m=+145.689231069" 
watchObservedRunningTime="2026-02-19 12:48:55.29446445 +0000 UTC m=+145.689983218" Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.314116 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:48:55 crc kubenswrapper[4833]: E0219 12:48:55.314568 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:55.814544365 +0000 UTC m=+146.210063123 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.334250 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-z4dsv" podStartSLOduration=124.33423388 podStartE2EDuration="2m4.33423388s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:55.333686236 +0000 UTC m=+145.729205004" watchObservedRunningTime="2026-02-19 12:48:55.33423388 +0000 UTC m=+145.729752648" Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.415385 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:55 crc kubenswrapper[4833]: E0219 12:48:55.415724 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:55.915708947 +0000 UTC m=+146.311227705 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.516814 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:48:55 crc kubenswrapper[4833]: E0219 12:48:55.516943 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:56.016925181 +0000 UTC m=+146.412443949 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.517321 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:55 crc kubenswrapper[4833]: E0219 12:48:55.517667 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:56.017651849 +0000 UTC m=+146.413170617 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.618240 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:48:55 crc kubenswrapper[4833]: E0219 12:48:55.618434 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:56.118404121 +0000 UTC m=+146.513922879 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.618533 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:55 crc kubenswrapper[4833]: E0219 12:48:55.618840 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:56.118829261 +0000 UTC m=+146.514348029 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.719443 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:48:55 crc kubenswrapper[4833]: E0219 12:48:55.719619 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:56.219595794 +0000 UTC m=+146.615114562 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.719743 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:55 crc kubenswrapper[4833]: E0219 12:48:55.720032 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:56.220023874 +0000 UTC m=+146.615542632 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.820950 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:48:55 crc kubenswrapper[4833]: E0219 12:48:55.821150 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:56.321117335 +0000 UTC m=+146.716636103 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.821386 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:55 crc kubenswrapper[4833]: E0219 12:48:55.821731 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:56.32172227 +0000 UTC m=+146.717241038 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.922814 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:48:55 crc kubenswrapper[4833]: E0219 12:48:55.923188 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:56.423174019 +0000 UTC m=+146.818692787 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.958000 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bpx5m" event={"ID":"c314e60b-4099-42e8-9eff-e3ef54025cc3","Type":"ContainerStarted","Data":"1e4e636f3cbc86d24e61ca0f48265bfcfb7cea1564c07a614a50cd3d19906da3"} Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.958865 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bpx5m" Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.960565 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4gj46" event={"ID":"ea6cc7f7-b2fa-40d4-93cd-795a01861ecb","Type":"ContainerStarted","Data":"9d63cf86b2e8420aa5139dda9b8430a5a22dc0c173ba6db43c904fc527b009fd"} Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.965905 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-86hv6" event={"ID":"ab5bee17-16ff-4e1b-9868-69443e2b10d4","Type":"ContainerStarted","Data":"a53b93c1da596990a5e2f5a07fc9aa30f5982dadde52e1418dd08a63d549e949"} Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.966246 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-86hv6" Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.967396 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9hhkc" event={"ID":"45708559-b521-4b8d-a745-12119e61a8cb","Type":"ContainerStarted","Data":"dc1b0d0e120716fb2e2dd6c46be86f55d88d89a3ba5e62274a6416dab09b03ed"} Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.967965 4833 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-9hhkc" Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.968968 4833 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9hhkc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.969020 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-9hhkc" podUID="45708559-b521-4b8d-a745-12119e61a8cb" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.969244 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhmhd" event={"ID":"6cee23cd-f0ff-4954-9497-69b2097a34f1","Type":"ContainerStarted","Data":"a04384b2345341dfc4f763b0c0c7cb1f822834dfb6a94075e8f912fb3a8829d1"} Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.969554 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhmhd" Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.972297 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zw6vx" event={"ID":"7d7f1229-1f55-416b-beeb-60a3ae0abc62","Type":"ContainerStarted","Data":"10bbac755570dda634d27a65383cd694922862579fc50dc9a9c28b42321c7899"} Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.972984 4833 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zw6vx container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.973011 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zw6vx" podUID="7d7f1229-1f55-416b-beeb-60a3ae0abc62" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.973706 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nlrpv" event={"ID":"20fbe5b6-4f07-41fd-a5f8-05c9d2c71089","Type":"ContainerStarted","Data":"6113b26917fc9a0529c9f39314799bac71897716a5a690e2c4e5bed9b46b71f8"} Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.975116 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5db89" event={"ID":"119b17a5-9014-4cea-b2cc-32e410c88465","Type":"ContainerStarted","Data":"0b66147ff513445007aae16c8d45b5cc1b0d06b5913fa0e40c3593e67f03a6c8"} Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.975677 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5db89" Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.979215 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzk7x" event={"ID":"b1bfd0e4-a923-4520-bd3c-df5aa3e2dfcc","Type":"ContainerStarted","Data":"c2bed0ba75d3b07ecb8ccb657bde977499c74e520c8ed49883576e5f1217efc8"} Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.979365 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bpx5m" podStartSLOduration=124.979353861 podStartE2EDuration="2m4.979353861s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:55.977936766 +0000 UTC m=+146.373455534" watchObservedRunningTime="2026-02-19 12:48:55.979353861 +0000 UTC m=+146.374872629" Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.980033 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzk7x" Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.983200 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhmhd" Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.984188 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7vxjx" event={"ID":"dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5","Type":"ContainerStarted","Data":"65e0980233406265660a0420d12ef46899e2e8b0e2e74ab1fc5b3f843ab45391"} Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.986322 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fr925" event={"ID":"360a44b7-5f42-42d7-918c-226761dbbd2c","Type":"ContainerStarted","Data":"949534de8f6a03972ad83cacb092e46f63e00cdf2a8283ec715e2a1244eaa639"} Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.986350 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fr925" event={"ID":"360a44b7-5f42-42d7-918c-226761dbbd2c","Type":"ContainerStarted","Data":"a28b05a2e985adb2549ace8a791147730725e332326a7cb61775e8c253836c79"} Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.988111 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-shgxh" event={"ID":"87c5cf64-6aee-4660-9847-5161a05a0410","Type":"ContainerStarted","Data":"1b877aced96de572f2e18a580813f75af6a36fcde63f4a970e2e348b5577a7f6"} Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.989146 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b" Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.989375 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b" Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.990540 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-4bk7q" event={"ID":"e0f8a770-7d1a-430b-8a25-aa325b17c767","Type":"ContainerStarted","Data":"801cb2698ba90a6fec47f14ece5e95ff307aa1dfccb219d524d20d5b6320ce15"} Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.994200 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzk7x" Feb 19 
12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.995660 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4xshp" event={"ID":"846a8d85-beb7-4c48-9705-59ed68378f4c","Type":"ContainerStarted","Data":"9b4d4b089b2df0879ba4491c7b57f279e72c80535fe2c8580d493ca24e0886d9"} Feb 19 12:48:55 crc kubenswrapper[4833]: I0219 12:48:55.997988 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qcdcz" event={"ID":"14cacc6a-664d-4560-875d-55e4c731671a","Type":"ContainerStarted","Data":"81c3f91fc526d5f54bda395c9a41aefd631397814da432bc856813d6d6a9b022"} Feb 19 12:48:56 crc kubenswrapper[4833]: I0219 12:48:56.003145 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cm4vr" event={"ID":"85474a25-567e-4ef0-be8a-75de8c7d18d9","Type":"ContainerStarted","Data":"88363bbb50a6e7b5d4b8f15b201cd14e670efca2b524c4641f43b9b27c4c17ef"} Feb 19 12:48:56 crc kubenswrapper[4833]: I0219 12:48:56.003425 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b" Feb 19 12:48:56 crc kubenswrapper[4833]: I0219 12:48:56.009287 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8cbn5" event={"ID":"d118e366-ab6c-41e1-9aae-c993e9125fd4","Type":"ContainerStarted","Data":"f25319ef2212cfd935a294cb54e22fb02b3d64bd81baafef3e61964c9e97dda9"} Feb 19 12:48:56 crc kubenswrapper[4833]: I0219 12:48:56.018132 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-krb8b" Feb 19 12:48:56 crc kubenswrapper[4833]: I0219 12:48:56.024227 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:56 crc kubenswrapper[4833]: I0219 12:48:56.024685 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-86hv6" podStartSLOduration=8.02466572 podStartE2EDuration="8.02466572s" podCreationTimestamp="2026-02-19 12:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:56.006637417 +0000 UTC m=+146.402156185" watchObservedRunningTime="2026-02-19 12:48:56.02466572 +0000 UTC m=+146.420184488" Feb 19 12:48:56 crc kubenswrapper[4833]: E0219 12:48:56.025786 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:56.525772328 +0000 UTC m=+146.921291096 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:56 crc kubenswrapper[4833]: I0219 12:48:56.046093 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5db89" podStartSLOduration=125.046074468 podStartE2EDuration="2m5.046074468s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:56.043723759 +0000 UTC m=+146.439242527" watchObservedRunningTime="2026-02-19 12:48:56.046074468 +0000 UTC m=+146.441593236" Feb 19 12:48:56 crc kubenswrapper[4833]: I0219 12:48:56.046194 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-bhmhd" podStartSLOduration=125.046190651 podStartE2EDuration="2m5.046190651s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:56.023546472 +0000 UTC m=+146.419065240" watchObservedRunningTime="2026-02-19 12:48:56.046190651 +0000 UTC m=+146.441709419" Feb 19 12:48:56 crc kubenswrapper[4833]: I0219 12:48:56.071471 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-9hhkc" podStartSLOduration=125.071450566 podStartE2EDuration="2m5.071450566s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:56.070898282 +0000 UTC m=+146.466417040" watchObservedRunningTime="2026-02-19 12:48:56.071450566 +0000 UTC m=+146.466969344" Feb 19 12:48:56 crc kubenswrapper[4833]: I0219 12:48:56.080651 4833 patch_prober.go:28] interesting pod/router-default-5444994796-k9bp4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 12:48:56 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Feb 19 12:48:56 crc kubenswrapper[4833]: [+]process-running ok Feb 19 12:48:56 crc kubenswrapper[4833]: healthz check failed Feb 19 12:48:56 crc kubenswrapper[4833]: I0219 12:48:56.080697 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k9bp4" podUID="88c5b880-561a-4961-91be-bc1ee9bdd96b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 12:48:56 crc kubenswrapper[4833]: I0219 12:48:56.107015 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-nlrpv" podStartSLOduration=125.106998049 podStartE2EDuration="2m5.106998049s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-19 12:48:56.104740982 +0000 UTC m=+146.500259750" watchObservedRunningTime="2026-02-19 12:48:56.106998049 +0000 UTC m=+146.502516817" Feb 19 12:48:56 crc kubenswrapper[4833]: I0219 12:48:56.125990 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:48:56 crc kubenswrapper[4833]: E0219 12:48:56.126247 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:56.626220952 +0000 UTC m=+147.021739720 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:56 crc kubenswrapper[4833]: I0219 12:48:56.131454 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:56 crc kubenswrapper[4833]: E0219 12:48:56.131707 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:56.63169919 +0000 UTC m=+147.027217958 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:56 crc kubenswrapper[4833]: I0219 12:48:56.155606 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-cm4vr" Feb 19 12:48:56 crc kubenswrapper[4833]: I0219 12:48:56.155663 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-cm4vr" Feb 19 12:48:56 crc kubenswrapper[4833]: I0219 12:48:56.169965 4833 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cm4vr container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 19 12:48:56 crc kubenswrapper[4833]: I0219 12:48:56.170284 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-cm4vr" podUID="85474a25-567e-4ef0-be8a-75de8c7d18d9" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 19 12:48:56 crc kubenswrapper[4833]: I0219 12:48:56.155477 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fr925" podStartSLOduration=125.155454516 podStartE2EDuration="2m5.155454516s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:56.133743781 +0000 UTC m=+146.529262549" watchObservedRunningTime="2026-02-19 12:48:56.155454516 +0000 UTC m=+146.550973284" Feb 19 12:48:56 crc kubenswrapper[4833]: I0219 12:48:56.177852 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-4xshp" podStartSLOduration=125.177827609 podStartE2EDuration="2m5.177827609s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:56.174935176 +0000 UTC m=+146.570453944" watchObservedRunningTime="2026-02-19 12:48:56.177827609 +0000 UTC m=+146.573346377" Feb 19 12:48:56 crc kubenswrapper[4833]: I0219 12:48:56.235245 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:48:56 crc kubenswrapper[4833]: E0219 12:48:56.235670 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
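The openshift-apiserver startup probe above is a plain HTTPS GET against /livez that fails at the dial stage because nothing is listening on 10.217.0.5:8443 yet. A rough stand-in for the check, with the endpoint taken from the log line; kubelet HTTPS probes skip certificate verification, which the sketch mirrors:

```go
// Approximation of the kubelet's HTTP startup probe here: GET
// https://10.217.0.5:8443/livez with a short timeout, treating any 2xx-3xx
// status as success. Not the kubelet's prober code, just the same shape.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 1 * time.Second,
		Transport: &http.Transport{
			// Kubelet HTTPS probes do not verify the serving certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.217.0.5:8443/livez")
	if err != nil {
		fmt.Println("probe failure:", err) // e.g. "connect: connection refused"
		return
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		fmt.Println("probe success:", resp.Status)
	} else {
		fmt.Println("probe failure: HTTP", resp.StatusCode)
	}
}
```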
No retries permitted until 2026-02-19 12:48:56.735640182 +0000 UTC m=+147.131158950 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:56 crc kubenswrapper[4833]: I0219 12:48:56.297084 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8cbn5" podStartSLOduration=125.297069935 podStartE2EDuration="2m5.297069935s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:56.29568785 +0000 UTC m=+146.691206618" watchObservedRunningTime="2026-02-19 12:48:56.297069935 +0000 UTC m=+146.692588703" Feb 19 12:48:56 crc kubenswrapper[4833]: I0219 12:48:56.336282 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:56 crc kubenswrapper[4833]: E0219 12:48:56.336530 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:56.836517237 +0000 UTC m=+147.232036005 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:56 crc kubenswrapper[4833]: I0219 12:48:56.364849 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-cm4vr" podStartSLOduration=125.364829098 podStartE2EDuration="2m5.364829098s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:56.355143305 +0000 UTC m=+146.750662073" watchObservedRunningTime="2026-02-19 12:48:56.364829098 +0000 UTC m=+146.760347866" Feb 19 12:48:56 crc kubenswrapper[4833]: I0219 12:48:56.412528 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-qcdcz" podStartSLOduration=125.412513656 podStartE2EDuration="2m5.412513656s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:56.404216758 +0000 UTC m=+146.799735526" watchObservedRunningTime="2026-02-19 12:48:56.412513656 +0000 UTC m=+146.808032424" Feb 19 12:48:56 crc kubenswrapper[4833]: I0219 12:48:56.438007 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:48:56 crc kubenswrapper[4833]: E0219 12:48:56.438364 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:56.938349576 +0000 UTC m=+147.333868344 (durationBeforeRetry 500ms). 
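The podStartSLOduration figures scattered through this second are plain timestamp arithmetic: watchObservedRunningTime minus podCreationTimestamp (the zeroed firstStartedPulling/lastFinishedPulling values mean no image pull contributed). Reproducing the openshift-apiserver number from the line above:

```go
// Timestamp arithmetic behind podStartSLOduration, using the two values
// logged for openshift-apiserver/apiserver-76f77b778f-cm4vr.
package main

import (
	"fmt"
	"log"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func main() {
	created, err := time.Parse(layout, "2026-02-19 12:46:51 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	observed, err := time.Parse(layout, "2026-02-19 12:48:56.364829098 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	// Prints 125.364829098, matching podStartSLOduration for that pod.
	fmt.Println(observed.Sub(created).Seconds())
}
```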
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:56 crc kubenswrapper[4833]: I0219 12:48:56.539755 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:56 crc kubenswrapper[4833]: E0219 12:48:56.540125 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:57.040107933 +0000 UTC m=+147.435626701 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:56 crc kubenswrapper[4833]: I0219 12:48:56.558832 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-mzk7x" podStartSLOduration=125.558813523 podStartE2EDuration="2m5.558813523s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:56.55792816 +0000 UTC m=+146.953446928" watchObservedRunningTime="2026-02-19 12:48:56.558813523 +0000 UTC m=+146.954332291" Feb 19 12:48:56 crc kubenswrapper[4833]: I0219 12:48:56.641128 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:48:56 crc kubenswrapper[4833]: E0219 12:48:56.641325 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:57.141301836 +0000 UTC m=+147.536820594 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:56 crc kubenswrapper[4833]: I0219 12:48:56.743107 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:56 crc kubenswrapper[4833]: E0219 12:48:56.743611 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:57.243587686 +0000 UTC m=+147.639106454 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:56 crc kubenswrapper[4833]: I0219 12:48:56.753832 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-z4dsv" Feb 19 12:48:56 crc kubenswrapper[4833]: I0219 12:48:56.844429 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:48:56 crc kubenswrapper[4833]: E0219 12:48:56.844788 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:57.344773579 +0000 UTC m=+147.740292337 (durationBeforeRetry 500ms). 
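Each failed operation is re-queued by nestedpendingoperations after the durationBeforeRetry shown; 500ms is the kubelet's initial backoff step, and on consecutive failures of the same operation the delay grows exponentially up to a cap. The constants in this sketch are assumptions based on upstream kubelet defaults, not values read from this cluster:

```go
// Illustrative sketch of the exponential backoff shape behind
// "durationBeforeRetry": an initial 500ms doubling per consecutive failure
// up to a cap (constants assumed from upstream kubelet defaults).
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initialDelay = 500 * time.Millisecond
		maxDelay     = 2*time.Minute + 2*time.Second
	)
	delay := initialDelay
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d: no retries permitted for %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```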
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:56 crc kubenswrapper[4833]: I0219 12:48:56.946587 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:56 crc kubenswrapper[4833]: E0219 12:48:56.947120 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:57.44710493 +0000 UTC m=+147.842623698 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:56 crc kubenswrapper[4833]: I0219 12:48:56.976057 4833 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5db89 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 12:48:56 crc kubenswrapper[4833]: I0219 12:48:56.976115 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5db89" podUID="119b17a5-9014-4cea-b2cc-32e410c88465" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.26:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.022313 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7vxjx" event={"ID":"dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5","Type":"ContainerStarted","Data":"201e37fd3a35e8bb6bd9d24410c4612705c5d4f55f7969635210ce88891d758f"} Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.022743 4833 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zw6vx container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.022781 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zw6vx" podUID="7d7f1229-1f55-416b-beeb-60a3ae0abc62" containerName="marketplace-operator" probeResult="failure" output="Get 
\"http://10.217.0.27:8080/healthz\": dial tcp 10.217.0.27:8080: connect: connection refused" Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.030295 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-9hhkc" Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.048032 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:48:57 crc kubenswrapper[4833]: E0219 12:48:57.048189 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:57.54816541 +0000 UTC m=+147.943684178 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.048389 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:57 crc kubenswrapper[4833]: E0219 12:48:57.048770 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:57.548757605 +0000 UTC m=+147.944276373 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.053875 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5db89" Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.061564 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2kph" Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.086612 4833 patch_prober.go:28] interesting pod/router-default-5444994796-k9bp4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 12:48:57 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Feb 19 12:48:57 crc kubenswrapper[4833]: [+]process-running ok Feb 19 12:48:57 crc kubenswrapper[4833]: healthz check failed Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.086662 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k9bp4" podUID="88c5b880-561a-4961-91be-bc1ee9bdd96b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.148936 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:48:57 crc kubenswrapper[4833]: E0219 12:48:57.149119 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:57.649093116 +0000 UTC m=+148.044611884 (durationBeforeRetry 500ms). 
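The router's startup probe output above uses the aggregated healthz convention: one [+]/[-] line per named sub-check and an overall HTTP 500 while any check fails. A toy handler reproducing that format; the check names come from the log, but the implementation is purely illustrative, not the OpenShift router's code:

```go
// Toy aggregated health endpoint in the format seen in the router probe
// output: one [+]/[-] line per named check, HTTP 500 overall on any failure.
package main

import (
	"fmt"
	"log"
	"net/http"
)

type check struct {
	name string
	ok   func() bool
}

func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		failed := false
		body := ""
		for _, c := range checks {
			if c.ok() {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			} else {
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
				failed = true
			}
		}
		if failed {
			body += "healthz check failed\n"
			w.WriteHeader(http.StatusInternalServerError)
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	checks := []check{
		{"backend-http", func() bool { return false }},
		{"has-synced", func() bool { return false }},
		{"process-running", func() bool { return true }},
	}
	http.HandleFunc("/healthz", healthz(checks))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```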
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.149409 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:57 crc kubenswrapper[4833]: E0219 12:48:57.151676 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:57.651662111 +0000 UTC m=+148.047180879 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.251252 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:48:57 crc kubenswrapper[4833]: E0219 12:48:57.251447 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:57.751417077 +0000 UTC m=+148.146935845 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.251509 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:57 crc kubenswrapper[4833]: E0219 12:48:57.251758 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:57.751745996 +0000 UTC m=+148.147264764 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.352994 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:48:57 crc kubenswrapper[4833]: E0219 12:48:57.353209 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:57.853181664 +0000 UTC m=+148.248700432 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.353296 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:57 crc kubenswrapper[4833]: E0219 12:48:57.353598 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:57.853586724 +0000 UTC m=+148.249105492 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.454823 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:48:57 crc kubenswrapper[4833]: E0219 12:48:57.455023 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:57.954995602 +0000 UTC m=+148.350514370 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.455267 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:57 crc kubenswrapper[4833]: E0219 12:48:57.455620 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:57.955608428 +0000 UTC m=+148.351127196 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.556298 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:48:57 crc kubenswrapper[4833]: E0219 12:48:57.556529 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:58.056505873 +0000 UTC m=+148.452024641 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.556630 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:57 crc kubenswrapper[4833]: E0219 12:48:57.556942 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:58.056930084 +0000 UTC m=+148.452448852 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.657732 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:48:57 crc kubenswrapper[4833]: E0219 12:48:57.657912 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:58.157887711 +0000 UTC m=+148.553406479 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.658220 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:57 crc kubenswrapper[4833]: E0219 12:48:57.658533 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:58.158518767 +0000 UTC m=+148.554037535 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.658966 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zfz65"] Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.659967 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zfz65" Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.664138 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.680604 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zfz65"] Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.759755 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:48:57 crc kubenswrapper[4833]: E0219 12:48:57.759935 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:58.259909705 +0000 UTC m=+148.655428473 (durationBeforeRetry 500ms). 
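The "No sandbox for pod can be found. Need to start a new one" lines mark the point where the kubelet turns a SyncLoop ADD into a CRI call asking the container runtime (CRI-O on this node) to create a pod sandbox. A hypothetical sketch of that gRPC surface using the published CRI API; the socket path and the idea of invoking it directly are assumptions for illustration only:

```go
// Hypothetical direct use of the CRI RuntimeService that the kubelet drives
// when it starts a new pod sandbox. Socket path assumed for CRI-O; metadata
// taken from the certified-operators-zfz65 log lines.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimev1.NewRuntimeServiceClient(conn)
	resp, err := rt.RunPodSandbox(context.TODO(), &runtimev1.RunPodSandboxRequest{
		Config: &runtimev1.PodSandboxConfig{
			Metadata: &runtimev1.PodSandboxMetadata{
				Name:      "certified-operators-zfz65",
				Namespace: "openshift-marketplace",
				Uid:       "248c3a65-f82e-475e-9d61-502028f6c2cc",
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("sandbox created:", resp.PodSandboxId)
}
```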
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.760016 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.760176 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/248c3a65-f82e-475e-9d61-502028f6c2cc-catalog-content\") pod \"certified-operators-zfz65\" (UID: \"248c3a65-f82e-475e-9d61-502028f6c2cc\") " pod="openshift-marketplace/certified-operators-zfz65" Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.760231 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9dss\" (UniqueName: \"kubernetes.io/projected/248c3a65-f82e-475e-9d61-502028f6c2cc-kube-api-access-t9dss\") pod \"certified-operators-zfz65\" (UID: \"248c3a65-f82e-475e-9d61-502028f6c2cc\") " pod="openshift-marketplace/certified-operators-zfz65" Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.760270 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/248c3a65-f82e-475e-9d61-502028f6c2cc-utilities\") pod \"certified-operators-zfz65\" (UID: \"248c3a65-f82e-475e-9d61-502028f6c2cc\") " pod="openshift-marketplace/certified-operators-zfz65" Feb 19 12:48:57 crc kubenswrapper[4833]: E0219 12:48:57.760347 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:58.260333455 +0000 UTC m=+148.655852223 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.854699 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jkz78"] Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.855571 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jkz78" Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.859145 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.860835 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.860994 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9dss\" (UniqueName: \"kubernetes.io/projected/248c3a65-f82e-475e-9d61-502028f6c2cc-kube-api-access-t9dss\") pod \"certified-operators-zfz65\" (UID: \"248c3a65-f82e-475e-9d61-502028f6c2cc\") " pod="openshift-marketplace/certified-operators-zfz65" Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.861031 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/248c3a65-f82e-475e-9d61-502028f6c2cc-utilities\") pod \"certified-operators-zfz65\" (UID: \"248c3a65-f82e-475e-9d61-502028f6c2cc\") " pod="openshift-marketplace/certified-operators-zfz65" Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.861121 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/248c3a65-f82e-475e-9d61-502028f6c2cc-catalog-content\") pod \"certified-operators-zfz65\" (UID: \"248c3a65-f82e-475e-9d61-502028f6c2cc\") " pod="openshift-marketplace/certified-operators-zfz65" Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.861517 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/248c3a65-f82e-475e-9d61-502028f6c2cc-catalog-content\") pod \"certified-operators-zfz65\" (UID: \"248c3a65-f82e-475e-9d61-502028f6c2cc\") " pod="openshift-marketplace/certified-operators-zfz65" Feb 19 12:48:57 crc kubenswrapper[4833]: E0219 12:48:57.861588 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:58.361574679 +0000 UTC m=+148.757093447 (durationBeforeRetry 500ms). 
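The reconciler_common lines follow one pattern throughout: the volume manager diffs its desired state of world (volumes the scheduled pods declare) against the actual state (what is mounted) and emits MountVolume or UnmountVolume operations for the difference, which is why setup for the new marketplace pods interleaves with teardown retries for the departed pod 8f668bae-612b-4b75-9490-919e737c6a3b. A toy sketch of that reconcile shape, with made-up data:

```go
// Toy desired-vs-actual reconcile loop in the spirit of the kubelet volume
// manager's reconciler_common lines. Not kubelet code; data is illustrative.
package main

import "fmt"

func reconcile(desired, actual map[string]bool) {
	for v := range desired {
		if !actual[v] {
			fmt.Println("operationExecutor.MountVolume started for", v)
		}
	}
	for v := range actual {
		if !desired[v] {
			fmt.Println("operationExecutor.UnmountVolume started for", v)
		}
	}
}

func main() {
	desired := map[string]bool{ // volumes declared by a newly added pod
		"catalog-content": true, "utilities": true, "kube-api-access-t9dss": true,
	}
	actual := map[string]bool{ // still mounted for a pod that has gone away
		"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8": true,
	}
	reconcile(desired, actual)
}
```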
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.861985 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/248c3a65-f82e-475e-9d61-502028f6c2cc-utilities\") pod \"certified-operators-zfz65\" (UID: \"248c3a65-f82e-475e-9d61-502028f6c2cc\") " pod="openshift-marketplace/certified-operators-zfz65" Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.895902 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9dss\" (UniqueName: \"kubernetes.io/projected/248c3a65-f82e-475e-9d61-502028f6c2cc-kube-api-access-t9dss\") pod \"certified-operators-zfz65\" (UID: \"248c3a65-f82e-475e-9d61-502028f6c2cc\") " pod="openshift-marketplace/certified-operators-zfz65" Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.925410 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jkz78"] Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.961822 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a9e548c-9edf-4dc6-83a7-4f07f6960721-utilities\") pod \"community-operators-jkz78\" (UID: \"3a9e548c-9edf-4dc6-83a7-4f07f6960721\") " pod="openshift-marketplace/community-operators-jkz78" Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.961867 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a9e548c-9edf-4dc6-83a7-4f07f6960721-catalog-content\") pod \"community-operators-jkz78\" (UID: \"3a9e548c-9edf-4dc6-83a7-4f07f6960721\") " pod="openshift-marketplace/community-operators-jkz78" Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.961888 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.961914 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql2nm\" (UniqueName: \"kubernetes.io/projected/3a9e548c-9edf-4dc6-83a7-4f07f6960721-kube-api-access-ql2nm\") pod \"community-operators-jkz78\" (UID: \"3a9e548c-9edf-4dc6-83a7-4f07f6960721\") " pod="openshift-marketplace/community-operators-jkz78" Feb 19 12:48:57 crc kubenswrapper[4833]: E0219 12:48:57.962212 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:58.462201498 +0000 UTC m=+148.857720266 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:57 crc kubenswrapper[4833]: I0219 12:48:57.972121 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zfz65" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.043547 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7vxjx" event={"ID":"dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5","Type":"ContainerStarted","Data":"4821986f03fca562d0e35081f5aedd8c82f3087d8d869680b72ab17135aaae3f"} Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.058440 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ssllf"] Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.059291 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ssllf" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.064080 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.064241 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a9e548c-9edf-4dc6-83a7-4f07f6960721-utilities\") pod \"community-operators-jkz78\" (UID: \"3a9e548c-9edf-4dc6-83a7-4f07f6960721\") " pod="openshift-marketplace/community-operators-jkz78" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.064269 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a9e548c-9edf-4dc6-83a7-4f07f6960721-catalog-content\") pod \"community-operators-jkz78\" (UID: \"3a9e548c-9edf-4dc6-83a7-4f07f6960721\") " pod="openshift-marketplace/community-operators-jkz78" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.064306 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql2nm\" (UniqueName: \"kubernetes.io/projected/3a9e548c-9edf-4dc6-83a7-4f07f6960721-kube-api-access-ql2nm\") pod \"community-operators-jkz78\" (UID: \"3a9e548c-9edf-4dc6-83a7-4f07f6960721\") " pod="openshift-marketplace/community-operators-jkz78" Feb 19 12:48:58 crc kubenswrapper[4833]: E0219 12:48:58.064683 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:58.564670403 +0000 UTC m=+148.960189171 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.065072 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a9e548c-9edf-4dc6-83a7-4f07f6960721-utilities\") pod \"community-operators-jkz78\" (UID: \"3a9e548c-9edf-4dc6-83a7-4f07f6960721\") " pod="openshift-marketplace/community-operators-jkz78" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.066697 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a9e548c-9edf-4dc6-83a7-4f07f6960721-catalog-content\") pod \"community-operators-jkz78\" (UID: \"3a9e548c-9edf-4dc6-83a7-4f07f6960721\") " pod="openshift-marketplace/community-operators-jkz78" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.081454 4833 patch_prober.go:28] interesting pod/router-default-5444994796-k9bp4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 12:48:58 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Feb 19 12:48:58 crc kubenswrapper[4833]: [+]process-running ok Feb 19 12:48:58 crc kubenswrapper[4833]: healthz check failed Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.081533 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k9bp4" podUID="88c5b880-561a-4961-91be-bc1ee9bdd96b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.104970 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql2nm\" (UniqueName: \"kubernetes.io/projected/3a9e548c-9edf-4dc6-83a7-4f07f6960721-kube-api-access-ql2nm\") pod \"community-operators-jkz78\" (UID: \"3a9e548c-9edf-4dc6-83a7-4f07f6960721\") " pod="openshift-marketplace/community-operators-jkz78" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.122522 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ssllf"] Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.165457 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5adb7ca-e392-4fff-aad0-078c4b6de62e-catalog-content\") pod \"certified-operators-ssllf\" (UID: \"b5adb7ca-e392-4fff-aad0-078c4b6de62e\") " pod="openshift-marketplace/certified-operators-ssllf" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.168401 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jkz78" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.172215 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5adb7ca-e392-4fff-aad0-078c4b6de62e-utilities\") pod \"certified-operators-ssllf\" (UID: \"b5adb7ca-e392-4fff-aad0-078c4b6de62e\") " pod="openshift-marketplace/certified-operators-ssllf" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.172316 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.172372 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdcq7\" (UniqueName: \"kubernetes.io/projected/b5adb7ca-e392-4fff-aad0-078c4b6de62e-kube-api-access-mdcq7\") pod \"certified-operators-ssllf\" (UID: \"b5adb7ca-e392-4fff-aad0-078c4b6de62e\") " pod="openshift-marketplace/certified-operators-ssllf" Feb 19 12:48:58 crc kubenswrapper[4833]: E0219 12:48:58.173411 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:58.673399955 +0000 UTC m=+149.068918713 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.244244 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4nnkn"] Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.249302 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4nnkn" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.261092 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4nnkn"] Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.273897 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.274161 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5adb7ca-e392-4fff-aad0-078c4b6de62e-catalog-content\") pod \"certified-operators-ssllf\" (UID: \"b5adb7ca-e392-4fff-aad0-078c4b6de62e\") " pod="openshift-marketplace/certified-operators-ssllf" Feb 19 12:48:58 crc kubenswrapper[4833]: E0219 12:48:58.274227 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:58.774201639 +0000 UTC m=+149.169720407 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.274410 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5adb7ca-e392-4fff-aad0-078c4b6de62e-utilities\") pod \"certified-operators-ssllf\" (UID: \"b5adb7ca-e392-4fff-aad0-078c4b6de62e\") " pod="openshift-marketplace/certified-operators-ssllf" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.274470 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.274557 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdcq7\" (UniqueName: \"kubernetes.io/projected/b5adb7ca-e392-4fff-aad0-078c4b6de62e-kube-api-access-mdcq7\") pod \"certified-operators-ssllf\" (UID: \"b5adb7ca-e392-4fff-aad0-078c4b6de62e\") " pod="openshift-marketplace/certified-operators-ssllf" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.274662 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5adb7ca-e392-4fff-aad0-078c4b6de62e-catalog-content\") pod \"certified-operators-ssllf\" (UID: \"b5adb7ca-e392-4fff-aad0-078c4b6de62e\") " pod="openshift-marketplace/certified-operators-ssllf" Feb 19 12:48:58 crc 
kubenswrapper[4833]: E0219 12:48:58.274935 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:58.774921167 +0000 UTC m=+149.170440045 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.274942 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5adb7ca-e392-4fff-aad0-078c4b6de62e-utilities\") pod \"certified-operators-ssllf\" (UID: \"b5adb7ca-e392-4fff-aad0-078c4b6de62e\") " pod="openshift-marketplace/certified-operators-ssllf" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.306195 4833 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.318749 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdcq7\" (UniqueName: \"kubernetes.io/projected/b5adb7ca-e392-4fff-aad0-078c4b6de62e-kube-api-access-mdcq7\") pod \"certified-operators-ssllf\" (UID: \"b5adb7ca-e392-4fff-aad0-078c4b6de62e\") " pod="openshift-marketplace/certified-operators-ssllf" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.376106 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.376349 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.376375 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.376402 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:48:58 
crc kubenswrapper[4833]: I0219 12:48:58.376440 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/906174c8-210a-4ee1-b18f-76dc4076ed5e-catalog-content\") pod \"community-operators-4nnkn\" (UID: \"906174c8-210a-4ee1-b18f-76dc4076ed5e\") " pod="openshift-marketplace/community-operators-4nnkn" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.376479 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqs2c\" (UniqueName: \"kubernetes.io/projected/906174c8-210a-4ee1-b18f-76dc4076ed5e-kube-api-access-sqs2c\") pod \"community-operators-4nnkn\" (UID: \"906174c8-210a-4ee1-b18f-76dc4076ed5e\") " pod="openshift-marketplace/community-operators-4nnkn" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.376513 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/906174c8-210a-4ee1-b18f-76dc4076ed5e-utilities\") pod \"community-operators-4nnkn\" (UID: \"906174c8-210a-4ee1-b18f-76dc4076ed5e\") " pod="openshift-marketplace/community-operators-4nnkn" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.376551 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.383275 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.385829 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:48:58 crc kubenswrapper[4833]: E0219 12:48:58.385906 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-19 12:48:58.885891875 +0000 UTC m=+149.281410643 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.386206 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ssllf" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.386515 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.387637 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.486796 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.486863 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/906174c8-210a-4ee1-b18f-76dc4076ed5e-catalog-content\") pod \"community-operators-4nnkn\" (UID: \"906174c8-210a-4ee1-b18f-76dc4076ed5e\") " pod="openshift-marketplace/community-operators-4nnkn" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.486898 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqs2c\" (UniqueName: \"kubernetes.io/projected/906174c8-210a-4ee1-b18f-76dc4076ed5e-kube-api-access-sqs2c\") pod \"community-operators-4nnkn\" (UID: \"906174c8-210a-4ee1-b18f-76dc4076ed5e\") " pod="openshift-marketplace/community-operators-4nnkn" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.486913 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/906174c8-210a-4ee1-b18f-76dc4076ed5e-utilities\") pod \"community-operators-4nnkn\" (UID: \"906174c8-210a-4ee1-b18f-76dc4076ed5e\") " pod="openshift-marketplace/community-operators-4nnkn" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.487260 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/906174c8-210a-4ee1-b18f-76dc4076ed5e-utilities\") pod \"community-operators-4nnkn\" (UID: \"906174c8-210a-4ee1-b18f-76dc4076ed5e\") " pod="openshift-marketplace/community-operators-4nnkn" Feb 19 12:48:58 crc kubenswrapper[4833]: E0219 12:48:58.487471 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-19 12:48:58.987461108 +0000 UTC m=+149.382979876 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lhs8n" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.487803 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/906174c8-210a-4ee1-b18f-76dc4076ed5e-catalog-content\") pod \"community-operators-4nnkn\" (UID: \"906174c8-210a-4ee1-b18f-76dc4076ed5e\") " pod="openshift-marketplace/community-operators-4nnkn" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.507684 4833 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-19T12:48:58.306221313Z","Handler":null,"Name":""} Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.526037 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqs2c\" (UniqueName: \"kubernetes.io/projected/906174c8-210a-4ee1-b18f-76dc4076ed5e-kube-api-access-sqs2c\") pod \"community-operators-4nnkn\" (UID: \"906174c8-210a-4ee1-b18f-76dc4076ed5e\") " pod="openshift-marketplace/community-operators-4nnkn" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.531115 4833 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.531161 4833 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.545040 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.560390 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.568038 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4nnkn" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.569569 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.569858 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zfz65"] Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.597292 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.605096 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jkz78"] Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.648342 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.700247 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.706028 4833 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.706066 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.758081 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lhs8n\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.861034 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:48:58 crc kubenswrapper[4833]: I0219 12:48:58.886081 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ssllf"] Feb 19 12:48:58 crc kubenswrapper[4833]: W0219 12:48:58.936034 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5adb7ca_e392_4fff_aad0_078c4b6de62e.slice/crio-fc34e05d78372393d5898093d7829081e9234f4acb0c95669f3f2807418545ac WatchSource:0}: Error finding container fc34e05d78372393d5898093d7829081e9234f4acb0c95669f3f2807418545ac: Status 404 returned error can't find the container with id fc34e05d78372393d5898093d7829081e9234f4acb0c95669f3f2807418545ac Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.076087 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7vxjx" event={"ID":"dd392d4e-1dc0-4367-9b88-94f5c1bd2ae5","Type":"ContainerStarted","Data":"8c547afadcffeff96cdae9f1d00b91e868094d949f7e6fd50acef3883ea3dd25"} Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.079771 4833 patch_prober.go:28] interesting pod/router-default-5444994796-k9bp4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 12:48:59 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Feb 19 12:48:59 crc kubenswrapper[4833]: [+]process-running ok Feb 19 12:48:59 crc kubenswrapper[4833]: healthz check failed Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.079866 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k9bp4" podUID="88c5b880-561a-4961-91be-bc1ee9bdd96b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.093198 4833 generic.go:334] "Generic (PLEG): container finished" podID="f436432a-f92b-4b2a-89a9-8014f487dc12" containerID="b07ed14e35ba092e38f1f9d40089d8df4020b768a3e9fe5355fb6e57f7423d66" exitCode=0 Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.093275 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525085-npsvp" event={"ID":"f436432a-f92b-4b2a-89a9-8014f487dc12","Type":"ContainerDied","Data":"b07ed14e35ba092e38f1f9d40089d8df4020b768a3e9fe5355fb6e57f7423d66"} Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.106222 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ssllf" event={"ID":"b5adb7ca-e392-4fff-aad0-078c4b6de62e","Type":"ContainerStarted","Data":"fc34e05d78372393d5898093d7829081e9234f4acb0c95669f3f2807418545ac"} Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.109941 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-7vxjx" podStartSLOduration=11.10992342 podStartE2EDuration="11.10992342s" podCreationTimestamp="2026-02-19 12:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:48:59.100122334 +0000 UTC m=+149.495641092" watchObservedRunningTime="2026-02-19 12:48:59.10992342 +0000 UTC m=+149.505442188" Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.113232 4833 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-jkz78" event={"ID":"3a9e548c-9edf-4dc6-83a7-4f07f6960721","Type":"ContainerStarted","Data":"8b9c14c5e785f2b02d27bb45daa3db4d076f62a883a135583aba2263d7819842"} Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.113269 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jkz78" event={"ID":"3a9e548c-9edf-4dc6-83a7-4f07f6960721","Type":"ContainerStarted","Data":"2f394c335a45878856fc851b2325b86d841d98f7c4d2f95e30c69b12f7c89ca5"} Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.116002 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.121927 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zfz65" event={"ID":"248c3a65-f82e-475e-9d61-502028f6c2cc","Type":"ContainerStarted","Data":"561730f428f1d316a01b2714dabd08ad67988dcf041a13c62de0c3c5011217be"} Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.123779 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zfz65" event={"ID":"248c3a65-f82e-475e-9d61-502028f6c2cc","Type":"ContainerStarted","Data":"94607577f097d2ab6fe1bfe83bef77dce5f6709d413b0fb41ff5bf479959b0e1"} Feb 19 12:48:59 crc kubenswrapper[4833]: W0219 12:48:59.234110 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-abfd4b884047706511771ae7d0d11d2f84b42176a45a7424cc72f2de9fe303a6 WatchSource:0}: Error finding container abfd4b884047706511771ae7d0d11d2f84b42176a45a7424cc72f2de9fe303a6: Status 404 returned error can't find the container with id abfd4b884047706511771ae7d0d11d2f84b42176a45a7424cc72f2de9fe303a6 Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.350734 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lhs8n"] Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.452508 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4nnkn"] Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.637762 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pjrfv"] Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.639250 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pjrfv" Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.642828 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.651615 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pjrfv"] Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.659591 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.660215 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.663028 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.663078 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.678476 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.716294 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75wj8\" (UniqueName: \"kubernetes.io/projected/01feede9-207a-499b-aee1-0fcde52463d6-kube-api-access-75wj8\") pod \"redhat-marketplace-pjrfv\" (UID: \"01feede9-207a-499b-aee1-0fcde52463d6\") " pod="openshift-marketplace/redhat-marketplace-pjrfv" Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.716440 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01feede9-207a-499b-aee1-0fcde52463d6-catalog-content\") pod \"redhat-marketplace-pjrfv\" (UID: \"01feede9-207a-499b-aee1-0fcde52463d6\") " pod="openshift-marketplace/redhat-marketplace-pjrfv" Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.716474 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01feede9-207a-499b-aee1-0fcde52463d6-utilities\") pod \"redhat-marketplace-pjrfv\" (UID: \"01feede9-207a-499b-aee1-0fcde52463d6\") " pod="openshift-marketplace/redhat-marketplace-pjrfv" Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.817871 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75wj8\" (UniqueName: \"kubernetes.io/projected/01feede9-207a-499b-aee1-0fcde52463d6-kube-api-access-75wj8\") pod \"redhat-marketplace-pjrfv\" (UID: \"01feede9-207a-499b-aee1-0fcde52463d6\") " pod="openshift-marketplace/redhat-marketplace-pjrfv" Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.817987 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b91f2de3-9661-44b6-903b-4393061ee56e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b91f2de3-9661-44b6-903b-4393061ee56e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.818061 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01feede9-207a-499b-aee1-0fcde52463d6-catalog-content\") pod \"redhat-marketplace-pjrfv\" (UID: \"01feede9-207a-499b-aee1-0fcde52463d6\") " pod="openshift-marketplace/redhat-marketplace-pjrfv" Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.818094 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b91f2de3-9661-44b6-903b-4393061ee56e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b91f2de3-9661-44b6-903b-4393061ee56e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 12:48:59 crc 
kubenswrapper[4833]: I0219 12:48:59.818123 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01feede9-207a-499b-aee1-0fcde52463d6-utilities\") pod \"redhat-marketplace-pjrfv\" (UID: \"01feede9-207a-499b-aee1-0fcde52463d6\") " pod="openshift-marketplace/redhat-marketplace-pjrfv" Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.818821 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01feede9-207a-499b-aee1-0fcde52463d6-utilities\") pod \"redhat-marketplace-pjrfv\" (UID: \"01feede9-207a-499b-aee1-0fcde52463d6\") " pod="openshift-marketplace/redhat-marketplace-pjrfv" Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.819185 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01feede9-207a-499b-aee1-0fcde52463d6-catalog-content\") pod \"redhat-marketplace-pjrfv\" (UID: \"01feede9-207a-499b-aee1-0fcde52463d6\") " pod="openshift-marketplace/redhat-marketplace-pjrfv" Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.849148 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75wj8\" (UniqueName: \"kubernetes.io/projected/01feede9-207a-499b-aee1-0fcde52463d6-kube-api-access-75wj8\") pod \"redhat-marketplace-pjrfv\" (UID: \"01feede9-207a-499b-aee1-0fcde52463d6\") " pod="openshift-marketplace/redhat-marketplace-pjrfv" Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.919652 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b91f2de3-9661-44b6-903b-4393061ee56e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b91f2de3-9661-44b6-903b-4393061ee56e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.919777 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b91f2de3-9661-44b6-903b-4393061ee56e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b91f2de3-9661-44b6-903b-4393061ee56e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.920371 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b91f2de3-9661-44b6-903b-4393061ee56e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"b91f2de3-9661-44b6-903b-4393061ee56e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.941874 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b91f2de3-9661-44b6-903b-4393061ee56e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"b91f2de3-9661-44b6-903b-4393061ee56e\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.960141 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pjrfv" Feb 19 12:48:59 crc kubenswrapper[4833]: I0219 12:48:59.972630 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.045398 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rbjpt"] Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.046892 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rbjpt" Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.053808 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rbjpt"] Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.080974 4833 patch_prober.go:28] interesting pod/router-default-5444994796-k9bp4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 12:49:00 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Feb 19 12:49:00 crc kubenswrapper[4833]: [+]process-running ok Feb 19 12:49:00 crc kubenswrapper[4833]: healthz check failed Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.081251 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k9bp4" podUID="88c5b880-561a-4961-91be-bc1ee9bdd96b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.124038 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s8xp\" (UniqueName: \"kubernetes.io/projected/950b9cae-bb19-478e-b128-83968a16e80f-kube-api-access-2s8xp\") pod \"redhat-marketplace-rbjpt\" (UID: \"950b9cae-bb19-478e-b128-83968a16e80f\") " pod="openshift-marketplace/redhat-marketplace-rbjpt" Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.124108 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/950b9cae-bb19-478e-b128-83968a16e80f-utilities\") pod \"redhat-marketplace-rbjpt\" (UID: \"950b9cae-bb19-478e-b128-83968a16e80f\") " pod="openshift-marketplace/redhat-marketplace-rbjpt" Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.124137 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/950b9cae-bb19-478e-b128-83968a16e80f-catalog-content\") pod \"redhat-marketplace-rbjpt\" (UID: \"950b9cae-bb19-478e-b128-83968a16e80f\") " pod="openshift-marketplace/redhat-marketplace-rbjpt" Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.161682 4833 generic.go:334] "Generic (PLEG): container finished" podID="906174c8-210a-4ee1-b18f-76dc4076ed5e" containerID="cc7be0a0d3f7d80a259aefede4b9c26b6431fb28fcf763d11133d513d958b2d6" exitCode=0 Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.161920 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nnkn" event={"ID":"906174c8-210a-4ee1-b18f-76dc4076ed5e","Type":"ContainerDied","Data":"cc7be0a0d3f7d80a259aefede4b9c26b6431fb28fcf763d11133d513d958b2d6"} Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.161952 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nnkn" 
event={"ID":"906174c8-210a-4ee1-b18f-76dc4076ed5e","Type":"ContainerStarted","Data":"1188d6169ef29e22ca0f77f20573feec76adeea592dd25ce7895a144cbbdb1f9"} Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.165690 4833 generic.go:334] "Generic (PLEG): container finished" podID="3a9e548c-9edf-4dc6-83a7-4f07f6960721" containerID="8b9c14c5e785f2b02d27bb45daa3db4d076f62a883a135583aba2263d7819842" exitCode=0 Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.165755 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jkz78" event={"ID":"3a9e548c-9edf-4dc6-83a7-4f07f6960721","Type":"ContainerDied","Data":"8b9c14c5e785f2b02d27bb45daa3db4d076f62a883a135583aba2263d7819842"} Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.174936 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f42eb71e30242e69d64f93dbb7f64d0c523bd0703cfff2e28252d51ee8bca2cc"} Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.174978 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"d3a3bdb0554aadb1a13be82059d82ef6dc252c21c6c62455d450f09da783e8ae"} Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.191774 4833 generic.go:334] "Generic (PLEG): container finished" podID="248c3a65-f82e-475e-9d61-502028f6c2cc" containerID="561730f428f1d316a01b2714dabd08ad67988dcf041a13c62de0c3c5011217be" exitCode=0 Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.191866 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zfz65" event={"ID":"248c3a65-f82e-475e-9d61-502028f6c2cc","Type":"ContainerDied","Data":"561730f428f1d316a01b2714dabd08ad67988dcf041a13c62de0c3c5011217be"} Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.193755 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c126cb291ccd64e04723c783a6f4b18b7d076f512942846f670659041f9b7f83"} Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.193807 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"2ac367ecaad6d75fa9b666e509a363894188fc293261a391de48d58ab6509a58"} Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.196127 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" event={"ID":"dfc152d3-9326-4602-8b02-c9fbc8f73199","Type":"ContainerStarted","Data":"8722c3575a814b538bcbad28e02ae26d871ba5861334497def4fbaed0e266e53"} Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.196161 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" event={"ID":"dfc152d3-9326-4602-8b02-c9fbc8f73199","Type":"ContainerStarted","Data":"ed61dbd6f4f033bf69943fdbb369db54c456f14b92831b3869e0b21732afedaa"} Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.196704 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 
12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.197803 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9a287b2de094fa44960d43030657ead306daf3f38c15e2db563e79b2d5652622"} Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.197825 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"abfd4b884047706511771ae7d0d11d2f84b42176a45a7424cc72f2de9fe303a6"} Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.198191 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.199695 4833 generic.go:334] "Generic (PLEG): container finished" podID="b5adb7ca-e392-4fff-aad0-078c4b6de62e" containerID="ab1a9814b38a895b9229dfc280bf95fe31d91d7fc56faace53dbeafacf805181" exitCode=0 Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.200546 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ssllf" event={"ID":"b5adb7ca-e392-4fff-aad0-078c4b6de62e","Type":"ContainerDied","Data":"ab1a9814b38a895b9229dfc280bf95fe31d91d7fc56faace53dbeafacf805181"} Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.225634 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s8xp\" (UniqueName: \"kubernetes.io/projected/950b9cae-bb19-478e-b128-83968a16e80f-kube-api-access-2s8xp\") pod \"redhat-marketplace-rbjpt\" (UID: \"950b9cae-bb19-478e-b128-83968a16e80f\") " pod="openshift-marketplace/redhat-marketplace-rbjpt" Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.225686 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/950b9cae-bb19-478e-b128-83968a16e80f-utilities\") pod \"redhat-marketplace-rbjpt\" (UID: \"950b9cae-bb19-478e-b128-83968a16e80f\") " pod="openshift-marketplace/redhat-marketplace-rbjpt" Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.225707 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/950b9cae-bb19-478e-b128-83968a16e80f-catalog-content\") pod \"redhat-marketplace-rbjpt\" (UID: \"950b9cae-bb19-478e-b128-83968a16e80f\") " pod="openshift-marketplace/redhat-marketplace-rbjpt" Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.226257 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/950b9cae-bb19-478e-b128-83968a16e80f-catalog-content\") pod \"redhat-marketplace-rbjpt\" (UID: \"950b9cae-bb19-478e-b128-83968a16e80f\") " pod="openshift-marketplace/redhat-marketplace-rbjpt" Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.226726 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/950b9cae-bb19-478e-b128-83968a16e80f-utilities\") pod \"redhat-marketplace-rbjpt\" (UID: \"950b9cae-bb19-478e-b128-83968a16e80f\") " pod="openshift-marketplace/redhat-marketplace-rbjpt" Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.266645 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 19 12:49:00 crc kubenswrapper[4833]: W0219 12:49:00.287246 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb91f2de3_9661_44b6_903b_4393061ee56e.slice/crio-53900b83abb8dab1780e6471e6e35bb360b915dc299946e34826fe378a6344a8 WatchSource:0}: Error finding container 53900b83abb8dab1780e6471e6e35bb360b915dc299946e34826fe378a6344a8: Status 404 returned error can't find the container with id 53900b83abb8dab1780e6471e6e35bb360b915dc299946e34826fe378a6344a8 Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.293731 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s8xp\" (UniqueName: \"kubernetes.io/projected/950b9cae-bb19-478e-b128-83968a16e80f-kube-api-access-2s8xp\") pod \"redhat-marketplace-rbjpt\" (UID: \"950b9cae-bb19-478e-b128-83968a16e80f\") " pod="openshift-marketplace/redhat-marketplace-rbjpt" Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.320753 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" podStartSLOduration=129.320734847 podStartE2EDuration="2m9.320734847s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:49:00.292907168 +0000 UTC m=+150.688425936" watchObservedRunningTime="2026-02-19 12:49:00.320734847 +0000 UTC m=+150.716253615" Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.374057 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.379652 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rbjpt" Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.456556 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pjrfv"] Feb 19 12:49:00 crc kubenswrapper[4833]: W0219 12:49:00.518025 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01feede9_207a_499b_aee1_0fcde52463d6.slice/crio-0f2daf397ded588088d0799135730cd745cb60af0159f84dabb2e0714734248e WatchSource:0}: Error finding container 0f2daf397ded588088d0799135730cd745cb60af0159f84dabb2e0714734248e: Status 404 returned error can't find the container with id 0f2daf397ded588088d0799135730cd745cb60af0159f84dabb2e0714734248e Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.583040 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525085-npsvp" Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.633204 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f436432a-f92b-4b2a-89a9-8014f487dc12-config-volume\") pod \"f436432a-f92b-4b2a-89a9-8014f487dc12\" (UID: \"f436432a-f92b-4b2a-89a9-8014f487dc12\") " Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.633242 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f436432a-f92b-4b2a-89a9-8014f487dc12-secret-volume\") pod \"f436432a-f92b-4b2a-89a9-8014f487dc12\" (UID: \"f436432a-f92b-4b2a-89a9-8014f487dc12\") " Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.633276 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz44h\" (UniqueName: \"kubernetes.io/projected/f436432a-f92b-4b2a-89a9-8014f487dc12-kube-api-access-mz44h\") pod \"f436432a-f92b-4b2a-89a9-8014f487dc12\" (UID: \"f436432a-f92b-4b2a-89a9-8014f487dc12\") " Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.634005 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f436432a-f92b-4b2a-89a9-8014f487dc12-config-volume" (OuterVolumeSpecName: "config-volume") pod "f436432a-f92b-4b2a-89a9-8014f487dc12" (UID: "f436432a-f92b-4b2a-89a9-8014f487dc12"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.635684 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rbjpt"] Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.638910 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f436432a-f92b-4b2a-89a9-8014f487dc12-kube-api-access-mz44h" (OuterVolumeSpecName: "kube-api-access-mz44h") pod "f436432a-f92b-4b2a-89a9-8014f487dc12" (UID: "f436432a-f92b-4b2a-89a9-8014f487dc12"). InnerVolumeSpecName "kube-api-access-mz44h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.639536 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f436432a-f92b-4b2a-89a9-8014f487dc12-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f436432a-f92b-4b2a-89a9-8014f487dc12" (UID: "f436432a-f92b-4b2a-89a9-8014f487dc12"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:49:00 crc kubenswrapper[4833]: W0219 12:49:00.655478 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod950b9cae_bb19_478e_b128_83968a16e80f.slice/crio-11e508ec00743fa5234479229be4dd522e0c92345c925b6b5dc4048c87e4cfb8 WatchSource:0}: Error finding container 11e508ec00743fa5234479229be4dd522e0c92345c925b6b5dc4048c87e4cfb8: Status 404 returned error can't find the container with id 11e508ec00743fa5234479229be4dd522e0c92345c925b6b5dc4048c87e4cfb8 Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.734950 4833 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f436432a-f92b-4b2a-89a9-8014f487dc12-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.734976 4833 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f436432a-f92b-4b2a-89a9-8014f487dc12-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:00 crc kubenswrapper[4833]: I0219 12:49:00.734988 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mz44h\" (UniqueName: \"kubernetes.io/projected/f436432a-f92b-4b2a-89a9-8014f487dc12-kube-api-access-mz44h\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.041164 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z7wn9"] Feb 19 12:49:01 crc kubenswrapper[4833]: E0219 12:49:01.041432 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f436432a-f92b-4b2a-89a9-8014f487dc12" containerName="collect-profiles" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.041445 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f436432a-f92b-4b2a-89a9-8014f487dc12" containerName="collect-profiles" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.041593 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f436432a-f92b-4b2a-89a9-8014f487dc12" containerName="collect-profiles" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.042828 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z7wn9" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.045148 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.063625 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z7wn9"] Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.079375 4833 patch_prober.go:28] interesting pod/router-default-5444994796-k9bp4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 12:49:01 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Feb 19 12:49:01 crc kubenswrapper[4833]: [+]process-running ok Feb 19 12:49:01 crc kubenswrapper[4833]: healthz check failed Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.079452 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k9bp4" podUID="88c5b880-561a-4961-91be-bc1ee9bdd96b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.139322 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1011b353-4bd1-4087-b510-22d34e72e48b-utilities\") pod \"redhat-operators-z7wn9\" (UID: \"1011b353-4bd1-4087-b510-22d34e72e48b\") " pod="openshift-marketplace/redhat-operators-z7wn9" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.139416 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58b7q\" (UniqueName: \"kubernetes.io/projected/1011b353-4bd1-4087-b510-22d34e72e48b-kube-api-access-58b7q\") pod \"redhat-operators-z7wn9\" (UID: \"1011b353-4bd1-4087-b510-22d34e72e48b\") " pod="openshift-marketplace/redhat-operators-z7wn9" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.139448 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1011b353-4bd1-4087-b510-22d34e72e48b-catalog-content\") pod \"redhat-operators-z7wn9\" (UID: \"1011b353-4bd1-4087-b510-22d34e72e48b\") " pod="openshift-marketplace/redhat-operators-z7wn9" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.164012 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-cm4vr" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.177164 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-cm4vr" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.209984 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525085-npsvp" event={"ID":"f436432a-f92b-4b2a-89a9-8014f487dc12","Type":"ContainerDied","Data":"25fcd9a755e8483605aac5d466a586546e35a9e18ad756af0296055e187486bf"} Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.210024 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25fcd9a755e8483605aac5d466a586546e35a9e18ad756af0296055e187486bf" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.210112 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525085-npsvp" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.213631 4833 generic.go:334] "Generic (PLEG): container finished" podID="950b9cae-bb19-478e-b128-83968a16e80f" containerID="894af90146483a1c6dda22bea58f23f77338025369b1806361bd3c0730975240" exitCode=0 Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.213714 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rbjpt" event={"ID":"950b9cae-bb19-478e-b128-83968a16e80f","Type":"ContainerDied","Data":"894af90146483a1c6dda22bea58f23f77338025369b1806361bd3c0730975240"} Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.213752 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rbjpt" event={"ID":"950b9cae-bb19-478e-b128-83968a16e80f","Type":"ContainerStarted","Data":"11e508ec00743fa5234479229be4dd522e0c92345c925b6b5dc4048c87e4cfb8"} Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.240054 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58b7q\" (UniqueName: \"kubernetes.io/projected/1011b353-4bd1-4087-b510-22d34e72e48b-kube-api-access-58b7q\") pod \"redhat-operators-z7wn9\" (UID: \"1011b353-4bd1-4087-b510-22d34e72e48b\") " pod="openshift-marketplace/redhat-operators-z7wn9" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.240129 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1011b353-4bd1-4087-b510-22d34e72e48b-catalog-content\") pod \"redhat-operators-z7wn9\" (UID: \"1011b353-4bd1-4087-b510-22d34e72e48b\") " pod="openshift-marketplace/redhat-operators-z7wn9" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.240162 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1011b353-4bd1-4087-b510-22d34e72e48b-utilities\") pod \"redhat-operators-z7wn9\" (UID: \"1011b353-4bd1-4087-b510-22d34e72e48b\") " pod="openshift-marketplace/redhat-operators-z7wn9" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.241788 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1011b353-4bd1-4087-b510-22d34e72e48b-catalog-content\") pod \"redhat-operators-z7wn9\" (UID: \"1011b353-4bd1-4087-b510-22d34e72e48b\") " pod="openshift-marketplace/redhat-operators-z7wn9" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.246900 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1011b353-4bd1-4087-b510-22d34e72e48b-utilities\") pod \"redhat-operators-z7wn9\" (UID: \"1011b353-4bd1-4087-b510-22d34e72e48b\") " pod="openshift-marketplace/redhat-operators-z7wn9" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.249638 4833 generic.go:334] "Generic (PLEG): container finished" podID="01feede9-207a-499b-aee1-0fcde52463d6" containerID="ff190f086e4f25b44277a860e4123b6dad5931284f107ebc03452500804ea7fa" exitCode=0 Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.249710 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pjrfv" event={"ID":"01feede9-207a-499b-aee1-0fcde52463d6","Type":"ContainerDied","Data":"ff190f086e4f25b44277a860e4123b6dad5931284f107ebc03452500804ea7fa"} Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 
12:49:01.249738 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pjrfv" event={"ID":"01feede9-207a-499b-aee1-0fcde52463d6","Type":"ContainerStarted","Data":"0f2daf397ded588088d0799135730cd745cb60af0159f84dabb2e0714734248e"} Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.258343 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b91f2de3-9661-44b6-903b-4393061ee56e","Type":"ContainerStarted","Data":"8ac56d9bd8e9a54eb1da913839054a2db30b7b11cd492c0d98b7025018db3d64"} Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.258381 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b91f2de3-9661-44b6-903b-4393061ee56e","Type":"ContainerStarted","Data":"53900b83abb8dab1780e6471e6e35bb360b915dc299946e34826fe378a6344a8"} Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.282393 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.282378251 podStartE2EDuration="2.282378251s" podCreationTimestamp="2026-02-19 12:48:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:49:01.277600261 +0000 UTC m=+151.673119029" watchObservedRunningTime="2026-02-19 12:49:01.282378251 +0000 UTC m=+151.677897019" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.286104 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58b7q\" (UniqueName: \"kubernetes.io/projected/1011b353-4bd1-4087-b510-22d34e72e48b-kube-api-access-58b7q\") pod \"redhat-operators-z7wn9\" (UID: \"1011b353-4bd1-4087-b510-22d34e72e48b\") " pod="openshift-marketplace/redhat-operators-z7wn9" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.321710 4833 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwh5m container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.321960 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hwh5m" podUID="7792e427-5573-4b37-858e-3c40b4f37505" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.322099 4833 patch_prober.go:28] interesting pod/downloads-7954f5f757-hwh5m container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.322162 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hwh5m" podUID="7792e427-5573-4b37-858e-3c40b4f37505" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.378310 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z7wn9" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.426884 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-zjv88" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.426921 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-zjv88" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.441044 4833 patch_prober.go:28] interesting pod/console-f9d7485db-zjv88 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.441094 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-zjv88" podUID="8dd5929b-7ec0-43c7-beb6-e3a3afbeec63" containerName="console" probeResult="failure" output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.520977 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g62n8"] Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.525103 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g62n8"] Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.525200 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g62n8" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.547708 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e07089cf-96f1-4054-89e1-19ab49960371-utilities\") pod \"redhat-operators-g62n8\" (UID: \"e07089cf-96f1-4054-89e1-19ab49960371\") " pod="openshift-marketplace/redhat-operators-g62n8" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.547746 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw6fc\" (UniqueName: \"kubernetes.io/projected/e07089cf-96f1-4054-89e1-19ab49960371-kube-api-access-pw6fc\") pod \"redhat-operators-g62n8\" (UID: \"e07089cf-96f1-4054-89e1-19ab49960371\") " pod="openshift-marketplace/redhat-operators-g62n8" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.547784 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e07089cf-96f1-4054-89e1-19ab49960371-catalog-content\") pod \"redhat-operators-g62n8\" (UID: \"e07089cf-96f1-4054-89e1-19ab49960371\") " pod="openshift-marketplace/redhat-operators-g62n8" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.648753 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e07089cf-96f1-4054-89e1-19ab49960371-utilities\") pod \"redhat-operators-g62n8\" (UID: \"e07089cf-96f1-4054-89e1-19ab49960371\") " pod="openshift-marketplace/redhat-operators-g62n8" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.648794 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw6fc\" (UniqueName: \"kubernetes.io/projected/e07089cf-96f1-4054-89e1-19ab49960371-kube-api-access-pw6fc\") pod 
\"redhat-operators-g62n8\" (UID: \"e07089cf-96f1-4054-89e1-19ab49960371\") " pod="openshift-marketplace/redhat-operators-g62n8" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.648856 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e07089cf-96f1-4054-89e1-19ab49960371-catalog-content\") pod \"redhat-operators-g62n8\" (UID: \"e07089cf-96f1-4054-89e1-19ab49960371\") " pod="openshift-marketplace/redhat-operators-g62n8" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.659100 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e07089cf-96f1-4054-89e1-19ab49960371-utilities\") pod \"redhat-operators-g62n8\" (UID: \"e07089cf-96f1-4054-89e1-19ab49960371\") " pod="openshift-marketplace/redhat-operators-g62n8" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.659180 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e07089cf-96f1-4054-89e1-19ab49960371-catalog-content\") pod \"redhat-operators-g62n8\" (UID: \"e07089cf-96f1-4054-89e1-19ab49960371\") " pod="openshift-marketplace/redhat-operators-g62n8" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.675088 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw6fc\" (UniqueName: \"kubernetes.io/projected/e07089cf-96f1-4054-89e1-19ab49960371-kube-api-access-pw6fc\") pod \"redhat-operators-g62n8\" (UID: \"e07089cf-96f1-4054-89e1-19ab49960371\") " pod="openshift-marketplace/redhat-operators-g62n8" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.849983 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g62n8" Feb 19 12:49:01 crc kubenswrapper[4833]: I0219 12:49:01.928169 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z7wn9"] Feb 19 12:49:01 crc kubenswrapper[4833]: W0219 12:49:01.943978 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1011b353_4bd1_4087_b510_22d34e72e48b.slice/crio-681497fc38f4d09434a20e7533c8c2c68909e4fafbf800c930da4ebe75a1e9f3 WatchSource:0}: Error finding container 681497fc38f4d09434a20e7533c8c2c68909e4fafbf800c930da4ebe75a1e9f3: Status 404 returned error can't find the container with id 681497fc38f4d09434a20e7533c8c2c68909e4fafbf800c930da4ebe75a1e9f3 Feb 19 12:49:02 crc kubenswrapper[4833]: I0219 12:49:02.065641 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zw6vx" Feb 19 12:49:02 crc kubenswrapper[4833]: I0219 12:49:02.076979 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-k9bp4" Feb 19 12:49:02 crc kubenswrapper[4833]: I0219 12:49:02.082283 4833 patch_prober.go:28] interesting pod/router-default-5444994796-k9bp4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 12:49:02 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Feb 19 12:49:02 crc kubenswrapper[4833]: [+]process-running ok Feb 19 12:49:02 crc kubenswrapper[4833]: healthz check failed Feb 19 12:49:02 crc kubenswrapper[4833]: I0219 12:49:02.082339 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k9bp4" podUID="88c5b880-561a-4961-91be-bc1ee9bdd96b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 12:49:02 crc kubenswrapper[4833]: I0219 12:49:02.274604 4833 generic.go:334] "Generic (PLEG): container finished" podID="b91f2de3-9661-44b6-903b-4393061ee56e" containerID="8ac56d9bd8e9a54eb1da913839054a2db30b7b11cd492c0d98b7025018db3d64" exitCode=0 Feb 19 12:49:02 crc kubenswrapper[4833]: I0219 12:49:02.274693 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b91f2de3-9661-44b6-903b-4393061ee56e","Type":"ContainerDied","Data":"8ac56d9bd8e9a54eb1da913839054a2db30b7b11cd492c0d98b7025018db3d64"} Feb 19 12:49:02 crc kubenswrapper[4833]: I0219 12:49:02.276426 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z7wn9" event={"ID":"1011b353-4bd1-4087-b510-22d34e72e48b","Type":"ContainerStarted","Data":"681497fc38f4d09434a20e7533c8c2c68909e4fafbf800c930da4ebe75a1e9f3"} Feb 19 12:49:02 crc kubenswrapper[4833]: I0219 12:49:02.385247 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g62n8"] Feb 19 12:49:02 crc kubenswrapper[4833]: W0219 12:49:02.425777 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode07089cf_96f1_4054_89e1_19ab49960371.slice/crio-957a65540dcf1ec43670394c83f71fe3d13ccd34339e524fbfa5a23482c7892c WatchSource:0}: Error finding container 957a65540dcf1ec43670394c83f71fe3d13ccd34339e524fbfa5a23482c7892c: Status 404 returned error can't find the container with id 
957a65540dcf1ec43670394c83f71fe3d13ccd34339e524fbfa5a23482c7892c Feb 19 12:49:03 crc kubenswrapper[4833]: I0219 12:49:03.080155 4833 patch_prober.go:28] interesting pod/router-default-5444994796-k9bp4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 12:49:03 crc kubenswrapper[4833]: [-]has-synced failed: reason withheld Feb 19 12:49:03 crc kubenswrapper[4833]: [+]process-running ok Feb 19 12:49:03 crc kubenswrapper[4833]: healthz check failed Feb 19 12:49:03 crc kubenswrapper[4833]: I0219 12:49:03.080223 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-k9bp4" podUID="88c5b880-561a-4961-91be-bc1ee9bdd96b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 12:49:03 crc kubenswrapper[4833]: I0219 12:49:03.302347 4833 generic.go:334] "Generic (PLEG): container finished" podID="1011b353-4bd1-4087-b510-22d34e72e48b" containerID="2915a40c08252e09695fe8c7122c0e99e664bb4959bddef813e113105767af47" exitCode=0 Feb 19 12:49:03 crc kubenswrapper[4833]: I0219 12:49:03.302427 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z7wn9" event={"ID":"1011b353-4bd1-4087-b510-22d34e72e48b","Type":"ContainerDied","Data":"2915a40c08252e09695fe8c7122c0e99e664bb4959bddef813e113105767af47"} Feb 19 12:49:03 crc kubenswrapper[4833]: I0219 12:49:03.315086 4833 generic.go:334] "Generic (PLEG): container finished" podID="e07089cf-96f1-4054-89e1-19ab49960371" containerID="5d699212959813ccd4bbbe64f43fd2bba63f188f3f802f44c16ca57cdca1dc57" exitCode=0 Feb 19 12:49:03 crc kubenswrapper[4833]: I0219 12:49:03.315902 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g62n8" event={"ID":"e07089cf-96f1-4054-89e1-19ab49960371","Type":"ContainerDied","Data":"5d699212959813ccd4bbbe64f43fd2bba63f188f3f802f44c16ca57cdca1dc57"} Feb 19 12:49:03 crc kubenswrapper[4833]: I0219 12:49:03.315933 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g62n8" event={"ID":"e07089cf-96f1-4054-89e1-19ab49960371","Type":"ContainerStarted","Data":"957a65540dcf1ec43670394c83f71fe3d13ccd34339e524fbfa5a23482c7892c"} Feb 19 12:49:03 crc kubenswrapper[4833]: I0219 12:49:03.545738 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 12:49:03 crc kubenswrapper[4833]: I0219 12:49:03.590400 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b91f2de3-9661-44b6-903b-4393061ee56e-kubelet-dir\") pod \"b91f2de3-9661-44b6-903b-4393061ee56e\" (UID: \"b91f2de3-9661-44b6-903b-4393061ee56e\") " Feb 19 12:49:03 crc kubenswrapper[4833]: I0219 12:49:03.590818 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b91f2de3-9661-44b6-903b-4393061ee56e-kube-api-access\") pod \"b91f2de3-9661-44b6-903b-4393061ee56e\" (UID: \"b91f2de3-9661-44b6-903b-4393061ee56e\") " Feb 19 12:49:03 crc kubenswrapper[4833]: I0219 12:49:03.591052 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b91f2de3-9661-44b6-903b-4393061ee56e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b91f2de3-9661-44b6-903b-4393061ee56e" (UID: "b91f2de3-9661-44b6-903b-4393061ee56e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 12:49:03 crc kubenswrapper[4833]: I0219 12:49:03.591543 4833 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b91f2de3-9661-44b6-903b-4393061ee56e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:03 crc kubenswrapper[4833]: I0219 12:49:03.607960 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b91f2de3-9661-44b6-903b-4393061ee56e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b91f2de3-9661-44b6-903b-4393061ee56e" (UID: "b91f2de3-9661-44b6-903b-4393061ee56e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:49:03 crc kubenswrapper[4833]: I0219 12:49:03.693068 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b91f2de3-9661-44b6-903b-4393061ee56e-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:04 crc kubenswrapper[4833]: I0219 12:49:04.079565 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-k9bp4" Feb 19 12:49:04 crc kubenswrapper[4833]: I0219 12:49:04.083276 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-k9bp4" Feb 19 12:49:04 crc kubenswrapper[4833]: I0219 12:49:04.369814 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 19 12:49:04 crc kubenswrapper[4833]: I0219 12:49:04.371699 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"b91f2de3-9661-44b6-903b-4393061ee56e","Type":"ContainerDied","Data":"53900b83abb8dab1780e6471e6e35bb360b915dc299946e34826fe378a6344a8"} Feb 19 12:49:04 crc kubenswrapper[4833]: I0219 12:49:04.371724 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53900b83abb8dab1780e6471e6e35bb360b915dc299946e34826fe378a6344a8" Feb 19 12:49:04 crc kubenswrapper[4833]: I0219 12:49:04.571002 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 12:49:04 crc kubenswrapper[4833]: E0219 12:49:04.571280 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b91f2de3-9661-44b6-903b-4393061ee56e" containerName="pruner" Feb 19 12:49:04 crc kubenswrapper[4833]: I0219 12:49:04.571291 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b91f2de3-9661-44b6-903b-4393061ee56e" containerName="pruner" Feb 19 12:49:04 crc kubenswrapper[4833]: I0219 12:49:04.571385 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="b91f2de3-9661-44b6-903b-4393061ee56e" containerName="pruner" Feb 19 12:49:04 crc kubenswrapper[4833]: I0219 12:49:04.571789 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 12:49:04 crc kubenswrapper[4833]: I0219 12:49:04.574191 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 12:49:04 crc kubenswrapper[4833]: I0219 12:49:04.574277 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 12:49:04 crc kubenswrapper[4833]: I0219 12:49:04.592840 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 12:49:04 crc kubenswrapper[4833]: I0219 12:49:04.607889 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5c3139a-5c4e-4eae-b7b9-b501a6ce5aab-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f5c3139a-5c4e-4eae-b7b9-b501a6ce5aab\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 12:49:04 crc kubenswrapper[4833]: I0219 12:49:04.608003 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5c3139a-5c4e-4eae-b7b9-b501a6ce5aab-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f5c3139a-5c4e-4eae-b7b9-b501a6ce5aab\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 12:49:04 crc kubenswrapper[4833]: I0219 12:49:04.709384 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5c3139a-5c4e-4eae-b7b9-b501a6ce5aab-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f5c3139a-5c4e-4eae-b7b9-b501a6ce5aab\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 12:49:04 crc kubenswrapper[4833]: I0219 12:49:04.709429 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/f5c3139a-5c4e-4eae-b7b9-b501a6ce5aab-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f5c3139a-5c4e-4eae-b7b9-b501a6ce5aab\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 12:49:04 crc kubenswrapper[4833]: I0219 12:49:04.709836 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5c3139a-5c4e-4eae-b7b9-b501a6ce5aab-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f5c3139a-5c4e-4eae-b7b9-b501a6ce5aab\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 12:49:04 crc kubenswrapper[4833]: I0219 12:49:04.781451 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5c3139a-5c4e-4eae-b7b9-b501a6ce5aab-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f5c3139a-5c4e-4eae-b7b9-b501a6ce5aab\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 12:49:04 crc kubenswrapper[4833]: I0219 12:49:04.895573 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 12:49:05 crc kubenswrapper[4833]: I0219 12:49:05.310006 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 19 12:49:05 crc kubenswrapper[4833]: W0219 12:49:05.343241 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podf5c3139a_5c4e_4eae_b7b9_b501a6ce5aab.slice/crio-3f1ccb40b1bde0282e6ed5b8da7151b9fb94a09c94f0a8ea76781ed8256274b8 WatchSource:0}: Error finding container 3f1ccb40b1bde0282e6ed5b8da7151b9fb94a09c94f0a8ea76781ed8256274b8: Status 404 returned error can't find the container with id 3f1ccb40b1bde0282e6ed5b8da7151b9fb94a09c94f0a8ea76781ed8256274b8 Feb 19 12:49:05 crc kubenswrapper[4833]: I0219 12:49:05.377543 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f5c3139a-5c4e-4eae-b7b9-b501a6ce5aab","Type":"ContainerStarted","Data":"3f1ccb40b1bde0282e6ed5b8da7151b9fb94a09c94f0a8ea76781ed8256274b8"} Feb 19 12:49:06 crc kubenswrapper[4833]: I0219 12:49:06.390145 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f5c3139a-5c4e-4eae-b7b9-b501a6ce5aab","Type":"ContainerStarted","Data":"cc885a539349aedd5988256d06963737a2aca1cd8bddac00f3f899018d6503c3"} Feb 19 12:49:06 crc kubenswrapper[4833]: I0219 12:49:06.929527 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-86hv6" Feb 19 12:49:07 crc kubenswrapper[4833]: I0219 12:49:07.398572 4833 generic.go:334] "Generic (PLEG): container finished" podID="f5c3139a-5c4e-4eae-b7b9-b501a6ce5aab" containerID="cc885a539349aedd5988256d06963737a2aca1cd8bddac00f3f899018d6503c3" exitCode=0 Feb 19 12:49:07 crc kubenswrapper[4833]: I0219 12:49:07.398617 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f5c3139a-5c4e-4eae-b7b9-b501a6ce5aab","Type":"ContainerDied","Data":"cc885a539349aedd5988256d06963737a2aca1cd8bddac00f3f899018d6503c3"} Feb 19 12:49:10 crc kubenswrapper[4833]: I0219 12:49:10.656192 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 12:49:10 crc kubenswrapper[4833]: I0219 12:49:10.692793 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5c3139a-5c4e-4eae-b7b9-b501a6ce5aab-kubelet-dir\") pod \"f5c3139a-5c4e-4eae-b7b9-b501a6ce5aab\" (UID: \"f5c3139a-5c4e-4eae-b7b9-b501a6ce5aab\") " Feb 19 12:49:10 crc kubenswrapper[4833]: I0219 12:49:10.692984 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5c3139a-5c4e-4eae-b7b9-b501a6ce5aab-kube-api-access\") pod \"f5c3139a-5c4e-4eae-b7b9-b501a6ce5aab\" (UID: \"f5c3139a-5c4e-4eae-b7b9-b501a6ce5aab\") " Feb 19 12:49:10 crc kubenswrapper[4833]: I0219 12:49:10.694661 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5c3139a-5c4e-4eae-b7b9-b501a6ce5aab-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f5c3139a-5c4e-4eae-b7b9-b501a6ce5aab" (UID: "f5c3139a-5c4e-4eae-b7b9-b501a6ce5aab"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 12:49:10 crc kubenswrapper[4833]: I0219 12:49:10.703043 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5c3139a-5c4e-4eae-b7b9-b501a6ce5aab-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f5c3139a-5c4e-4eae-b7b9-b501a6ce5aab" (UID: "f5c3139a-5c4e-4eae-b7b9-b501a6ce5aab"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:49:10 crc kubenswrapper[4833]: I0219 12:49:10.796558 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5c3139a-5c4e-4eae-b7b9-b501a6ce5aab-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:10 crc kubenswrapper[4833]: I0219 12:49:10.796649 4833 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5c3139a-5c4e-4eae-b7b9-b501a6ce5aab-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:11 crc kubenswrapper[4833]: I0219 12:49:11.321555 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-hwh5m" Feb 19 12:49:11 crc kubenswrapper[4833]: I0219 12:49:11.430389 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f5c3139a-5c4e-4eae-b7b9-b501a6ce5aab","Type":"ContainerDied","Data":"3f1ccb40b1bde0282e6ed5b8da7151b9fb94a09c94f0a8ea76781ed8256274b8"} Feb 19 12:49:11 crc kubenswrapper[4833]: I0219 12:49:11.430428 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f1ccb40b1bde0282e6ed5b8da7151b9fb94a09c94f0a8ea76781ed8256274b8" Feb 19 12:49:11 crc kubenswrapper[4833]: I0219 12:49:11.430485 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 19 12:49:11 crc kubenswrapper[4833]: I0219 12:49:11.432929 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-zjv88" Feb 19 12:49:11 crc kubenswrapper[4833]: I0219 12:49:11.439873 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-zjv88" Feb 19 12:49:14 crc kubenswrapper[4833]: I0219 12:49:14.209264 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9hhkc"] Feb 19 12:49:14 crc kubenswrapper[4833]: I0219 12:49:14.209889 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-9hhkc" podUID="45708559-b521-4b8d-a745-12119e61a8cb" containerName="controller-manager" containerID="cri-o://dc1b0d0e120716fb2e2dd6c46be86f55d88d89a3ba5e62274a6416dab09b03ed" gracePeriod=30 Feb 19 12:49:14 crc kubenswrapper[4833]: I0219 12:49:14.215864 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzmp9"] Feb 19 12:49:14 crc kubenswrapper[4833]: I0219 12:49:14.216129 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzmp9" podUID="17e0b921-253c-44e3-8abd-616a4c22825c" containerName="route-controller-manager" containerID="cri-o://414c8b353f848dad344c3622b546d19fcb8a6cbde0a833452ab8371321ca24f3" gracePeriod=30 Feb 19 12:49:14 crc kubenswrapper[4833]: I0219 12:49:14.344263 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4177542e-89ba-436d-bc9d-e792f2da656c-metrics-certs\") pod \"network-metrics-daemon-clgkm\" (UID: \"4177542e-89ba-436d-bc9d-e792f2da656c\") " pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:49:14 crc kubenswrapper[4833]: I0219 12:49:14.351978 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4177542e-89ba-436d-bc9d-e792f2da656c-metrics-certs\") pod \"network-metrics-daemon-clgkm\" (UID: \"4177542e-89ba-436d-bc9d-e792f2da656c\") " pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:49:14 crc kubenswrapper[4833]: I0219 12:49:14.629105 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-clgkm" Feb 19 12:49:15 crc kubenswrapper[4833]: I0219 12:49:15.458022 4833 generic.go:334] "Generic (PLEG): container finished" podID="45708559-b521-4b8d-a745-12119e61a8cb" containerID="dc1b0d0e120716fb2e2dd6c46be86f55d88d89a3ba5e62274a6416dab09b03ed" exitCode=0 Feb 19 12:49:15 crc kubenswrapper[4833]: I0219 12:49:15.458092 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9hhkc" event={"ID":"45708559-b521-4b8d-a745-12119e61a8cb","Type":"ContainerDied","Data":"dc1b0d0e120716fb2e2dd6c46be86f55d88d89a3ba5e62274a6416dab09b03ed"} Feb 19 12:49:15 crc kubenswrapper[4833]: I0219 12:49:15.459932 4833 generic.go:334] "Generic (PLEG): container finished" podID="17e0b921-253c-44e3-8abd-616a4c22825c" containerID="414c8b353f848dad344c3622b546d19fcb8a6cbde0a833452ab8371321ca24f3" exitCode=0 Feb 19 12:49:15 crc kubenswrapper[4833]: I0219 12:49:15.460067 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzmp9" event={"ID":"17e0b921-253c-44e3-8abd-616a4c22825c","Type":"ContainerDied","Data":"414c8b353f848dad344c3622b546d19fcb8a6cbde0a833452ab8371321ca24f3"} Feb 19 12:49:15 crc kubenswrapper[4833]: I0219 12:49:15.744655 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 12:49:15 crc kubenswrapper[4833]: I0219 12:49:15.744732 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 12:49:18 crc kubenswrapper[4833]: I0219 12:49:18.868330 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:49:21 crc kubenswrapper[4833]: I0219 12:49:21.298424 4833 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-vzmp9 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Feb 19 12:49:21 crc kubenswrapper[4833]: I0219 12:49:21.298520 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzmp9" podUID="17e0b921-253c-44e3-8abd-616a4c22825c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Feb 19 12:49:22 crc kubenswrapper[4833]: I0219 12:49:22.032885 4833 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-9hhkc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Feb 19 12:49:22 crc kubenswrapper[4833]: I0219 12:49:22.033289 4833 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-879f6c89f-9hhkc" podUID="45708559-b521-4b8d-a745-12119e61a8cb" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Feb 19 12:49:29 crc kubenswrapper[4833]: E0219 12:49:29.207736 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 19 12:49:29 crc kubenswrapper[4833]: E0219 12:49:29.208389 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t9dss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-zfz65_openshift-marketplace(248c3a65-f82e-475e-9d61-502028f6c2cc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 12:49:29 crc kubenswrapper[4833]: E0219 12:49:29.209736 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-zfz65" podUID="248c3a65-f82e-475e-9d61-502028f6c2cc" Feb 19 12:49:29 crc kubenswrapper[4833]: E0219 12:49:29.705257 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-zfz65" podUID="248c3a65-f82e-475e-9d61-502028f6c2cc" Feb 19 12:49:29 crc kubenswrapper[4833]: E0219 12:49:29.799267 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 19 12:49:29 crc kubenswrapper[4833]: E0219 12:49:29.799785 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ql2nm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-jkz78_openshift-marketplace(3a9e548c-9edf-4dc6-83a7-4f07f6960721): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 12:49:29 crc kubenswrapper[4833]: E0219 12:49:29.800944 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-jkz78" podUID="3a9e548c-9edf-4dc6-83a7-4f07f6960721" Feb 19 12:49:29 crc kubenswrapper[4833]: E0219 12:49:29.817381 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 19 12:49:29 crc kubenswrapper[4833]: E0219 12:49:29.817556 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mdcq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ssllf_openshift-marketplace(b5adb7ca-e392-4fff-aad0-078c4b6de62e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 12:49:29 crc kubenswrapper[4833]: E0219 12:49:29.818983 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-ssllf" podUID="b5adb7ca-e392-4fff-aad0-078c4b6de62e" Feb 19 12:49:31 crc kubenswrapper[4833]: E0219 12:49:31.281407 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ssllf" podUID="b5adb7ca-e392-4fff-aad0-078c4b6de62e" Feb 19 12:49:31 crc kubenswrapper[4833]: E0219 12:49:31.281412 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-jkz78" podUID="3a9e548c-9edf-4dc6-83a7-4f07f6960721" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.338575 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9hhkc" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.345107 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzmp9" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.368846 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-794c77566f-86c7l"] Feb 19 12:49:31 crc kubenswrapper[4833]: E0219 12:49:31.369086 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5c3139a-5c4e-4eae-b7b9-b501a6ce5aab" containerName="pruner" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.369102 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5c3139a-5c4e-4eae-b7b9-b501a6ce5aab" containerName="pruner" Feb 19 12:49:31 crc kubenswrapper[4833]: E0219 12:49:31.369111 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17e0b921-253c-44e3-8abd-616a4c22825c" containerName="route-controller-manager" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.369116 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e0b921-253c-44e3-8abd-616a4c22825c" containerName="route-controller-manager" Feb 19 12:49:31 crc kubenswrapper[4833]: E0219 12:49:31.369123 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45708559-b521-4b8d-a745-12119e61a8cb" containerName="controller-manager" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.369130 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="45708559-b521-4b8d-a745-12119e61a8cb" containerName="controller-manager" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.369221 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="17e0b921-253c-44e3-8abd-616a4c22825c" containerName="route-controller-manager" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.369230 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5c3139a-5c4e-4eae-b7b9-b501a6ce5aab" containerName="pruner" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.369240 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="45708559-b521-4b8d-a745-12119e61a8cb" containerName="controller-manager" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.370058 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-794c77566f-86c7l" Feb 19 12:49:31 crc kubenswrapper[4833]: E0219 12:49:31.379343 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 19 12:49:31 crc kubenswrapper[4833]: E0219 12:49:31.379569 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2s8xp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rbjpt_openshift-marketplace(950b9cae-bb19-478e-b128-83968a16e80f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 12:49:31 crc kubenswrapper[4833]: E0219 12:49:31.380944 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-rbjpt" podUID="950b9cae-bb19-478e-b128-83968a16e80f" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.385322 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-794c77566f-86c7l"] Feb 19 12:49:31 crc kubenswrapper[4833]: E0219 12:49:31.404590 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 19 12:49:31 crc kubenswrapper[4833]: E0219 12:49:31.404772 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog 
--cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-75wj8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-pjrfv_openshift-marketplace(01feede9-207a-499b-aee1-0fcde52463d6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 19 12:49:31 crc kubenswrapper[4833]: E0219 12:49:31.406166 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-pjrfv" podUID="01feede9-207a-499b-aee1-0fcde52463d6" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.511208 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45708559-b521-4b8d-a745-12119e61a8cb-proxy-ca-bundles\") pod \"45708559-b521-4b8d-a745-12119e61a8cb\" (UID: \"45708559-b521-4b8d-a745-12119e61a8cb\") " Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.511280 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45708559-b521-4b8d-a745-12119e61a8cb-serving-cert\") pod \"45708559-b521-4b8d-a745-12119e61a8cb\" (UID: \"45708559-b521-4b8d-a745-12119e61a8cb\") " Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.511325 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17e0b921-253c-44e3-8abd-616a4c22825c-config\") pod \"17e0b921-253c-44e3-8abd-616a4c22825c\" (UID: \"17e0b921-253c-44e3-8abd-616a4c22825c\") " Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.511356 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17e0b921-253c-44e3-8abd-616a4c22825c-serving-cert\") pod \"17e0b921-253c-44e3-8abd-616a4c22825c\" (UID: \"17e0b921-253c-44e3-8abd-616a4c22825c\") " Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.511390 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/45708559-b521-4b8d-a745-12119e61a8cb-config\") pod \"45708559-b521-4b8d-a745-12119e61a8cb\" (UID: \"45708559-b521-4b8d-a745-12119e61a8cb\") " Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.511411 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17e0b921-253c-44e3-8abd-616a4c22825c-client-ca\") pod \"17e0b921-253c-44e3-8abd-616a4c22825c\" (UID: \"17e0b921-253c-44e3-8abd-616a4c22825c\") " Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.511470 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbmmc\" (UniqueName: \"kubernetes.io/projected/17e0b921-253c-44e3-8abd-616a4c22825c-kube-api-access-gbmmc\") pod \"17e0b921-253c-44e3-8abd-616a4c22825c\" (UID: \"17e0b921-253c-44e3-8abd-616a4c22825c\") " Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.511521 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45708559-b521-4b8d-a745-12119e61a8cb-client-ca\") pod \"45708559-b521-4b8d-a745-12119e61a8cb\" (UID: \"45708559-b521-4b8d-a745-12119e61a8cb\") " Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.511549 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-877dl\" (UniqueName: \"kubernetes.io/projected/45708559-b521-4b8d-a745-12119e61a8cb-kube-api-access-877dl\") pod \"45708559-b521-4b8d-a745-12119e61a8cb\" (UID: \"45708559-b521-4b8d-a745-12119e61a8cb\") " Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.511706 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/714902bb-2bf7-4ba3-a7c7-3e1a712d276b-config\") pod \"controller-manager-794c77566f-86c7l\" (UID: \"714902bb-2bf7-4ba3-a7c7-3e1a712d276b\") " pod="openshift-controller-manager/controller-manager-794c77566f-86c7l" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.511752 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/714902bb-2bf7-4ba3-a7c7-3e1a712d276b-client-ca\") pod \"controller-manager-794c77566f-86c7l\" (UID: \"714902bb-2bf7-4ba3-a7c7-3e1a712d276b\") " pod="openshift-controller-manager/controller-manager-794c77566f-86c7l" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.511814 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/714902bb-2bf7-4ba3-a7c7-3e1a712d276b-proxy-ca-bundles\") pod \"controller-manager-794c77566f-86c7l\" (UID: \"714902bb-2bf7-4ba3-a7c7-3e1a712d276b\") " pod="openshift-controller-manager/controller-manager-794c77566f-86c7l" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.511848 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgfs9\" (UniqueName: \"kubernetes.io/projected/714902bb-2bf7-4ba3-a7c7-3e1a712d276b-kube-api-access-tgfs9\") pod \"controller-manager-794c77566f-86c7l\" (UID: \"714902bb-2bf7-4ba3-a7c7-3e1a712d276b\") " pod="openshift-controller-manager/controller-manager-794c77566f-86c7l" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.511873 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/714902bb-2bf7-4ba3-a7c7-3e1a712d276b-serving-cert\") pod \"controller-manager-794c77566f-86c7l\" (UID: \"714902bb-2bf7-4ba3-a7c7-3e1a712d276b\") " pod="openshift-controller-manager/controller-manager-794c77566f-86c7l" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.512137 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45708559-b521-4b8d-a745-12119e61a8cb-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "45708559-b521-4b8d-a745-12119e61a8cb" (UID: "45708559-b521-4b8d-a745-12119e61a8cb"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.512410 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45708559-b521-4b8d-a745-12119e61a8cb-client-ca" (OuterVolumeSpecName: "client-ca") pod "45708559-b521-4b8d-a745-12119e61a8cb" (UID: "45708559-b521-4b8d-a745-12119e61a8cb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.512971 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17e0b921-253c-44e3-8abd-616a4c22825c-config" (OuterVolumeSpecName: "config") pod "17e0b921-253c-44e3-8abd-616a4c22825c" (UID: "17e0b921-253c-44e3-8abd-616a4c22825c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.512994 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45708559-b521-4b8d-a745-12119e61a8cb-config" (OuterVolumeSpecName: "config") pod "45708559-b521-4b8d-a745-12119e61a8cb" (UID: "45708559-b521-4b8d-a745-12119e61a8cb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.513037 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17e0b921-253c-44e3-8abd-616a4c22825c-client-ca" (OuterVolumeSpecName: "client-ca") pod "17e0b921-253c-44e3-8abd-616a4c22825c" (UID: "17e0b921-253c-44e3-8abd-616a4c22825c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.519510 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17e0b921-253c-44e3-8abd-616a4c22825c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "17e0b921-253c-44e3-8abd-616a4c22825c" (UID: "17e0b921-253c-44e3-8abd-616a4c22825c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.521757 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45708559-b521-4b8d-a745-12119e61a8cb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "45708559-b521-4b8d-a745-12119e61a8cb" (UID: "45708559-b521-4b8d-a745-12119e61a8cb"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.524040 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-clgkm"] Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.524147 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45708559-b521-4b8d-a745-12119e61a8cb-kube-api-access-877dl" (OuterVolumeSpecName: "kube-api-access-877dl") pod "45708559-b521-4b8d-a745-12119e61a8cb" (UID: "45708559-b521-4b8d-a745-12119e61a8cb"). InnerVolumeSpecName "kube-api-access-877dl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.525524 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17e0b921-253c-44e3-8abd-616a4c22825c-kube-api-access-gbmmc" (OuterVolumeSpecName: "kube-api-access-gbmmc") pod "17e0b921-253c-44e3-8abd-616a4c22825c" (UID: "17e0b921-253c-44e3-8abd-616a4c22825c"). InnerVolumeSpecName "kube-api-access-gbmmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.567342 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzmp9" event={"ID":"17e0b921-253c-44e3-8abd-616a4c22825c","Type":"ContainerDied","Data":"56831085d07e26d8902e9d8dcafe610b45d8836b5309c21566a26ef29101a9f3"} Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.567396 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzmp9" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.567419 4833 scope.go:117] "RemoveContainer" containerID="414c8b353f848dad344c3622b546d19fcb8a6cbde0a833452ab8371321ca24f3" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.570751 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-9hhkc" event={"ID":"45708559-b521-4b8d-a745-12119e61a8cb","Type":"ContainerDied","Data":"9419e0ff9d37ca16a44b41ee340e1efe76ff71bf04ff01f37318098baf953111"} Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.570792 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-9hhkc" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.601366 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzmp9"] Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.604231 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzmp9"] Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.612453 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/714902bb-2bf7-4ba3-a7c7-3e1a712d276b-proxy-ca-bundles\") pod \"controller-manager-794c77566f-86c7l\" (UID: \"714902bb-2bf7-4ba3-a7c7-3e1a712d276b\") " pod="openshift-controller-manager/controller-manager-794c77566f-86c7l" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.612515 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/714902bb-2bf7-4ba3-a7c7-3e1a712d276b-serving-cert\") pod \"controller-manager-794c77566f-86c7l\" (UID: \"714902bb-2bf7-4ba3-a7c7-3e1a712d276b\") " pod="openshift-controller-manager/controller-manager-794c77566f-86c7l" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.612533 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgfs9\" (UniqueName: \"kubernetes.io/projected/714902bb-2bf7-4ba3-a7c7-3e1a712d276b-kube-api-access-tgfs9\") pod \"controller-manager-794c77566f-86c7l\" (UID: \"714902bb-2bf7-4ba3-a7c7-3e1a712d276b\") " pod="openshift-controller-manager/controller-manager-794c77566f-86c7l" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.612563 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/714902bb-2bf7-4ba3-a7c7-3e1a712d276b-config\") pod \"controller-manager-794c77566f-86c7l\" (UID: \"714902bb-2bf7-4ba3-a7c7-3e1a712d276b\") " pod="openshift-controller-manager/controller-manager-794c77566f-86c7l" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.612588 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/714902bb-2bf7-4ba3-a7c7-3e1a712d276b-client-ca\") pod \"controller-manager-794c77566f-86c7l\" (UID: \"714902bb-2bf7-4ba3-a7c7-3e1a712d276b\") " pod="openshift-controller-manager/controller-manager-794c77566f-86c7l" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.612625 4833 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45708559-b521-4b8d-a745-12119e61a8cb-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.612635 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45708559-b521-4b8d-a745-12119e61a8cb-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.612645 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17e0b921-253c-44e3-8abd-616a4c22825c-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.612654 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/17e0b921-253c-44e3-8abd-616a4c22825c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.612664 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17e0b921-253c-44e3-8abd-616a4c22825c-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.612672 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45708559-b521-4b8d-a745-12119e61a8cb-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.612681 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbmmc\" (UniqueName: \"kubernetes.io/projected/17e0b921-253c-44e3-8abd-616a4c22825c-kube-api-access-gbmmc\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.612689 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45708559-b521-4b8d-a745-12119e61a8cb-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.612697 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-877dl\" (UniqueName: \"kubernetes.io/projected/45708559-b521-4b8d-a745-12119e61a8cb-kube-api-access-877dl\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.613629 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/714902bb-2bf7-4ba3-a7c7-3e1a712d276b-client-ca\") pod \"controller-manager-794c77566f-86c7l\" (UID: \"714902bb-2bf7-4ba3-a7c7-3e1a712d276b\") " pod="openshift-controller-manager/controller-manager-794c77566f-86c7l" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.615386 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/714902bb-2bf7-4ba3-a7c7-3e1a712d276b-proxy-ca-bundles\") pod \"controller-manager-794c77566f-86c7l\" (UID: \"714902bb-2bf7-4ba3-a7c7-3e1a712d276b\") " pod="openshift-controller-manager/controller-manager-794c77566f-86c7l" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.616912 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/714902bb-2bf7-4ba3-a7c7-3e1a712d276b-config\") pod \"controller-manager-794c77566f-86c7l\" (UID: \"714902bb-2bf7-4ba3-a7c7-3e1a712d276b\") " pod="openshift-controller-manager/controller-manager-794c77566f-86c7l" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.621487 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/714902bb-2bf7-4ba3-a7c7-3e1a712d276b-serving-cert\") pod \"controller-manager-794c77566f-86c7l\" (UID: \"714902bb-2bf7-4ba3-a7c7-3e1a712d276b\") " pod="openshift-controller-manager/controller-manager-794c77566f-86c7l" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.628559 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9hhkc"] Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.629258 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgfs9\" (UniqueName: \"kubernetes.io/projected/714902bb-2bf7-4ba3-a7c7-3e1a712d276b-kube-api-access-tgfs9\") pod 
\"controller-manager-794c77566f-86c7l\" (UID: \"714902bb-2bf7-4ba3-a7c7-3e1a712d276b\") " pod="openshift-controller-manager/controller-manager-794c77566f-86c7l" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.632119 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-9hhkc"] Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.720479 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-794c77566f-86c7l" Feb 19 12:49:31 crc kubenswrapper[4833]: I0219 12:49:31.850829 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-bpx5m" Feb 19 12:49:32 crc kubenswrapper[4833]: I0219 12:49:32.298781 4833 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-vzmp9 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 12:49:32 crc kubenswrapper[4833]: I0219 12:49:32.298842 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vzmp9" podUID="17e0b921-253c-44e3-8abd-616a4c22825c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 19 12:49:32 crc kubenswrapper[4833]: I0219 12:49:32.324437 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17e0b921-253c-44e3-8abd-616a4c22825c" path="/var/lib/kubelet/pods/17e0b921-253c-44e3-8abd-616a4c22825c/volumes" Feb 19 12:49:32 crc kubenswrapper[4833]: I0219 12:49:32.325076 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45708559-b521-4b8d-a745-12119e61a8cb" path="/var/lib/kubelet/pods/45708559-b521-4b8d-a745-12119e61a8cb/volumes" Feb 19 12:49:33 crc kubenswrapper[4833]: I0219 12:49:33.783177 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56bfc9748-gr5gn"] Feb 19 12:49:33 crc kubenswrapper[4833]: I0219 12:49:33.784663 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56bfc9748-gr5gn"] Feb 19 12:49:33 crc kubenswrapper[4833]: I0219 12:49:33.784858 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-gr5gn" Feb 19 12:49:33 crc kubenswrapper[4833]: I0219 12:49:33.787430 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 12:49:33 crc kubenswrapper[4833]: I0219 12:49:33.787842 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 12:49:33 crc kubenswrapper[4833]: I0219 12:49:33.789329 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 12:49:33 crc kubenswrapper[4833]: I0219 12:49:33.789703 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 12:49:33 crc kubenswrapper[4833]: I0219 12:49:33.789874 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 12:49:33 crc kubenswrapper[4833]: I0219 12:49:33.790011 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 12:49:33 crc kubenswrapper[4833]: I0219 12:49:33.942663 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b6f9553-525d-4a8c-b420-067a9c5ec4a6-config\") pod \"route-controller-manager-56bfc9748-gr5gn\" (UID: \"5b6f9553-525d-4a8c-b420-067a9c5ec4a6\") " pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-gr5gn" Feb 19 12:49:33 crc kubenswrapper[4833]: I0219 12:49:33.942740 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b6f9553-525d-4a8c-b420-067a9c5ec4a6-client-ca\") pod \"route-controller-manager-56bfc9748-gr5gn\" (UID: \"5b6f9553-525d-4a8c-b420-067a9c5ec4a6\") " pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-gr5gn" Feb 19 12:49:33 crc kubenswrapper[4833]: I0219 12:49:33.942786 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7rrj\" (UniqueName: \"kubernetes.io/projected/5b6f9553-525d-4a8c-b420-067a9c5ec4a6-kube-api-access-n7rrj\") pod \"route-controller-manager-56bfc9748-gr5gn\" (UID: \"5b6f9553-525d-4a8c-b420-067a9c5ec4a6\") " pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-gr5gn" Feb 19 12:49:33 crc kubenswrapper[4833]: I0219 12:49:33.942818 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b6f9553-525d-4a8c-b420-067a9c5ec4a6-serving-cert\") pod \"route-controller-manager-56bfc9748-gr5gn\" (UID: \"5b6f9553-525d-4a8c-b420-067a9c5ec4a6\") " pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-gr5gn" Feb 19 12:49:34 crc kubenswrapper[4833]: I0219 12:49:34.043848 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b6f9553-525d-4a8c-b420-067a9c5ec4a6-config\") pod \"route-controller-manager-56bfc9748-gr5gn\" (UID: \"5b6f9553-525d-4a8c-b420-067a9c5ec4a6\") " pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-gr5gn" Feb 19 12:49:34 crc kubenswrapper[4833]: I0219 12:49:34.043909 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b6f9553-525d-4a8c-b420-067a9c5ec4a6-client-ca\") pod \"route-controller-manager-56bfc9748-gr5gn\" (UID: \"5b6f9553-525d-4a8c-b420-067a9c5ec4a6\") " pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-gr5gn" Feb 19 12:49:34 crc kubenswrapper[4833]: I0219 12:49:34.043952 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7rrj\" (UniqueName: \"kubernetes.io/projected/5b6f9553-525d-4a8c-b420-067a9c5ec4a6-kube-api-access-n7rrj\") pod \"route-controller-manager-56bfc9748-gr5gn\" (UID: \"5b6f9553-525d-4a8c-b420-067a9c5ec4a6\") " pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-gr5gn" Feb 19 12:49:34 crc kubenswrapper[4833]: I0219 12:49:34.043987 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b6f9553-525d-4a8c-b420-067a9c5ec4a6-serving-cert\") pod \"route-controller-manager-56bfc9748-gr5gn\" (UID: \"5b6f9553-525d-4a8c-b420-067a9c5ec4a6\") " pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-gr5gn" Feb 19 12:49:34 crc kubenswrapper[4833]: I0219 12:49:34.045169 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b6f9553-525d-4a8c-b420-067a9c5ec4a6-client-ca\") pod \"route-controller-manager-56bfc9748-gr5gn\" (UID: \"5b6f9553-525d-4a8c-b420-067a9c5ec4a6\") " pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-gr5gn" Feb 19 12:49:34 crc kubenswrapper[4833]: I0219 12:49:34.045442 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b6f9553-525d-4a8c-b420-067a9c5ec4a6-config\") pod \"route-controller-manager-56bfc9748-gr5gn\" (UID: \"5b6f9553-525d-4a8c-b420-067a9c5ec4a6\") " pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-gr5gn" Feb 19 12:49:34 crc kubenswrapper[4833]: I0219 12:49:34.053603 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b6f9553-525d-4a8c-b420-067a9c5ec4a6-serving-cert\") pod \"route-controller-manager-56bfc9748-gr5gn\" (UID: \"5b6f9553-525d-4a8c-b420-067a9c5ec4a6\") " pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-gr5gn" Feb 19 12:49:34 crc kubenswrapper[4833]: I0219 12:49:34.061728 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7rrj\" (UniqueName: \"kubernetes.io/projected/5b6f9553-525d-4a8c-b420-067a9c5ec4a6-kube-api-access-n7rrj\") pod \"route-controller-manager-56bfc9748-gr5gn\" (UID: \"5b6f9553-525d-4a8c-b420-067a9c5ec4a6\") " pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-gr5gn" Feb 19 12:49:34 crc kubenswrapper[4833]: I0219 12:49:34.135374 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-gr5gn" Feb 19 12:49:34 crc kubenswrapper[4833]: I0219 12:49:34.162290 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-794c77566f-86c7l"] Feb 19 12:49:34 crc kubenswrapper[4833]: I0219 12:49:34.267575 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56bfc9748-gr5gn"] Feb 19 12:49:35 crc kubenswrapper[4833]: E0219 12:49:35.588195 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-pjrfv" podUID="01feede9-207a-499b-aee1-0fcde52463d6" Feb 19 12:49:35 crc kubenswrapper[4833]: E0219 12:49:35.588212 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rbjpt" podUID="950b9cae-bb19-478e-b128-83968a16e80f" Feb 19 12:49:35 crc kubenswrapper[4833]: I0219 12:49:35.603581 4833 scope.go:117] "RemoveContainer" containerID="dc1b0d0e120716fb2e2dd6c46be86f55d88d89a3ba5e62274a6416dab09b03ed" Feb 19 12:49:35 crc kubenswrapper[4833]: I0219 12:49:35.861112 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56bfc9748-gr5gn"] Feb 19 12:49:35 crc kubenswrapper[4833]: W0219 12:49:35.870379 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b6f9553_525d_4a8c_b420_067a9c5ec4a6.slice/crio-f1577f81dd9eb5fbd278065fe6d4361a81bda1e48db4ecdf9f743e2abe5897d8 WatchSource:0}: Error finding container f1577f81dd9eb5fbd278065fe6d4361a81bda1e48db4ecdf9f743e2abe5897d8: Status 404 returned error can't find the container with id f1577f81dd9eb5fbd278065fe6d4361a81bda1e48db4ecdf9f743e2abe5897d8 Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.026024 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-794c77566f-86c7l"] Feb 19 12:49:36 crc kubenswrapper[4833]: W0219 12:49:36.032340 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod714902bb_2bf7_4ba3_a7c7_3e1a712d276b.slice/crio-a0ed088746996a3ea5d47389e66de66456f5e0c478cbc8502f312852d0dd0686 WatchSource:0}: Error finding container a0ed088746996a3ea5d47389e66de66456f5e0c478cbc8502f312852d0dd0686: Status 404 returned error can't find the container with id a0ed088746996a3ea5d47389e66de66456f5e0c478cbc8502f312852d0dd0686 Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.606789 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-794c77566f-86c7l" event={"ID":"714902bb-2bf7-4ba3-a7c7-3e1a712d276b","Type":"ContainerStarted","Data":"20fc026eaa71ebc14c93d280dc9cc8dff6ec143ede475ac62fc02ca1a396e683"} Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.607168 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-794c77566f-86c7l" Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.607181 4833 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-794c77566f-86c7l" event={"ID":"714902bb-2bf7-4ba3-a7c7-3e1a712d276b","Type":"ContainerStarted","Data":"a0ed088746996a3ea5d47389e66de66456f5e0c478cbc8502f312852d0dd0686"} Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.608280 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-794c77566f-86c7l" podUID="714902bb-2bf7-4ba3-a7c7-3e1a712d276b" containerName="controller-manager" containerID="cri-o://20fc026eaa71ebc14c93d280dc9cc8dff6ec143ede475ac62fc02ca1a396e683" gracePeriod=30 Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.613726 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-gr5gn" event={"ID":"5b6f9553-525d-4a8c-b420-067a9c5ec4a6","Type":"ContainerStarted","Data":"4ab568c910e8fb364981014f065d1c594445825e1ab7a5f0b3bf3d0648763073"} Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.613767 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-gr5gn" event={"ID":"5b6f9553-525d-4a8c-b420-067a9c5ec4a6","Type":"ContainerStarted","Data":"f1577f81dd9eb5fbd278065fe6d4361a81bda1e48db4ecdf9f743e2abe5897d8"} Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.613879 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-gr5gn" podUID="5b6f9553-525d-4a8c-b420-067a9c5ec4a6" containerName="route-controller-manager" containerID="cri-o://4ab568c910e8fb364981014f065d1c594445825e1ab7a5f0b3bf3d0648763073" gracePeriod=30 Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.614114 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-gr5gn" Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.614595 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-794c77566f-86c7l" Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.619936 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-clgkm" event={"ID":"4177542e-89ba-436d-bc9d-e792f2da656c","Type":"ContainerStarted","Data":"520cae6ef2f09019802afb81429def81db80ec7666a6dd7d5d0f6e0c530c7cbe"} Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.619980 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-clgkm" event={"ID":"4177542e-89ba-436d-bc9d-e792f2da656c","Type":"ContainerStarted","Data":"51b44fe3fee08f82da4b9a8b3a4c8176d822757865dd1b0a380c3bab2c052eef"} Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.619991 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-clgkm" event={"ID":"4177542e-89ba-436d-bc9d-e792f2da656c","Type":"ContainerStarted","Data":"ae62250cee7c226012a602d453e8a284b9ab85c116b91fd18e2039f55205d209"} Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.624795 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-gr5gn" Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.632187 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z7wn9" 
event={"ID":"1011b353-4bd1-4087-b510-22d34e72e48b","Type":"ContainerStarted","Data":"4bb34b83f731e629aa16d5f339abd5b3033e7855c76ce90fbfc1767a6895270f"} Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.636599 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-794c77566f-86c7l" podStartSLOduration=22.636582682 podStartE2EDuration="22.636582682s" podCreationTimestamp="2026-02-19 12:49:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:49:36.633201907 +0000 UTC m=+187.028720715" watchObservedRunningTime="2026-02-19 12:49:36.636582682 +0000 UTC m=+187.032101450" Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.644225 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g62n8" event={"ID":"e07089cf-96f1-4054-89e1-19ab49960371","Type":"ContainerStarted","Data":"b4465d5c7c6dbe5110f420c1061f0b6890fd947e61edbca4b416c402c3cb8c09"} Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.646672 4833 generic.go:334] "Generic (PLEG): container finished" podID="906174c8-210a-4ee1-b18f-76dc4076ed5e" containerID="93460f6915cba3bef6c58ee7ae1580dc3d8d6e946138053962a53b6e549f9b6c" exitCode=0 Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.646703 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nnkn" event={"ID":"906174c8-210a-4ee1-b18f-76dc4076ed5e","Type":"ContainerDied","Data":"93460f6915cba3bef6c58ee7ae1580dc3d8d6e946138053962a53b6e549f9b6c"} Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.666291 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-gr5gn" podStartSLOduration=22.666275458 podStartE2EDuration="22.666275458s" podCreationTimestamp="2026-02-19 12:49:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:49:36.664394141 +0000 UTC m=+187.059912909" watchObservedRunningTime="2026-02-19 12:49:36.666275458 +0000 UTC m=+187.061794226" Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.678133 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-clgkm" podStartSLOduration=165.678116966 podStartE2EDuration="2m45.678116966s" podCreationTimestamp="2026-02-19 12:46:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:49:36.675384627 +0000 UTC m=+187.070903395" watchObservedRunningTime="2026-02-19 12:49:36.678116966 +0000 UTC m=+187.073635734" Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.763967 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.765407 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.768322 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.768749 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.773274 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.884992 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/39e1c272-3d4f-4432-b459-14bed408fac9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"39e1c272-3d4f-4432-b459-14bed408fac9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.885129 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/39e1c272-3d4f-4432-b459-14bed408fac9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"39e1c272-3d4f-4432-b459-14bed408fac9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.977393 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-794c77566f-86c7l" Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.985517 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/714902bb-2bf7-4ba3-a7c7-3e1a712d276b-serving-cert\") pod \"714902bb-2bf7-4ba3-a7c7-3e1a712d276b\" (UID: \"714902bb-2bf7-4ba3-a7c7-3e1a712d276b\") " Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.985557 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgfs9\" (UniqueName: \"kubernetes.io/projected/714902bb-2bf7-4ba3-a7c7-3e1a712d276b-kube-api-access-tgfs9\") pod \"714902bb-2bf7-4ba3-a7c7-3e1a712d276b\" (UID: \"714902bb-2bf7-4ba3-a7c7-3e1a712d276b\") " Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.985621 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/714902bb-2bf7-4ba3-a7c7-3e1a712d276b-client-ca\") pod \"714902bb-2bf7-4ba3-a7c7-3e1a712d276b\" (UID: \"714902bb-2bf7-4ba3-a7c7-3e1a712d276b\") " Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.985656 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/714902bb-2bf7-4ba3-a7c7-3e1a712d276b-config\") pod \"714902bb-2bf7-4ba3-a7c7-3e1a712d276b\" (UID: \"714902bb-2bf7-4ba3-a7c7-3e1a712d276b\") " Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.985674 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/714902bb-2bf7-4ba3-a7c7-3e1a712d276b-proxy-ca-bundles\") pod \"714902bb-2bf7-4ba3-a7c7-3e1a712d276b\" (UID: \"714902bb-2bf7-4ba3-a7c7-3e1a712d276b\") " Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.985761 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/39e1c272-3d4f-4432-b459-14bed408fac9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"39e1c272-3d4f-4432-b459-14bed408fac9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.985802 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/39e1c272-3d4f-4432-b459-14bed408fac9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"39e1c272-3d4f-4432-b459-14bed408fac9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.985875 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/39e1c272-3d4f-4432-b459-14bed408fac9-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"39e1c272-3d4f-4432-b459-14bed408fac9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.986508 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/714902bb-2bf7-4ba3-a7c7-3e1a712d276b-client-ca" (OuterVolumeSpecName: "client-ca") pod "714902bb-2bf7-4ba3-a7c7-3e1a712d276b" (UID: "714902bb-2bf7-4ba3-a7c7-3e1a712d276b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:49:36 crc kubenswrapper[4833]: I0219 12:49:36.986680 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/714902bb-2bf7-4ba3-a7c7-3e1a712d276b-config" (OuterVolumeSpecName: "config") pod "714902bb-2bf7-4ba3-a7c7-3e1a712d276b" (UID: "714902bb-2bf7-4ba3-a7c7-3e1a712d276b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.000222 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/714902bb-2bf7-4ba3-a7c7-3e1a712d276b-kube-api-access-tgfs9" (OuterVolumeSpecName: "kube-api-access-tgfs9") pod "714902bb-2bf7-4ba3-a7c7-3e1a712d276b" (UID: "714902bb-2bf7-4ba3-a7c7-3e1a712d276b"). InnerVolumeSpecName "kube-api-access-tgfs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.000323 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/714902bb-2bf7-4ba3-a7c7-3e1a712d276b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "714902bb-2bf7-4ba3-a7c7-3e1a712d276b" (UID: "714902bb-2bf7-4ba3-a7c7-3e1a712d276b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.006641 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/39e1c272-3d4f-4432-b459-14bed408fac9-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"39e1c272-3d4f-4432-b459-14bed408fac9\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.007968 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/714902bb-2bf7-4ba3-a7c7-3e1a712d276b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "714902bb-2bf7-4ba3-a7c7-3e1a712d276b" (UID: "714902bb-2bf7-4ba3-a7c7-3e1a712d276b"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.008008 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-86c7df544-rrpmh"] Feb 19 12:49:37 crc kubenswrapper[4833]: E0219 12:49:37.008436 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="714902bb-2bf7-4ba3-a7c7-3e1a712d276b" containerName="controller-manager" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.008548 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="714902bb-2bf7-4ba3-a7c7-3e1a712d276b" containerName="controller-manager" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.008775 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="714902bb-2bf7-4ba3-a7c7-3e1a712d276b" containerName="controller-manager" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.009358 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86c7df544-rrpmh" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.020512 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86c7df544-rrpmh"] Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.030516 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-gr5gn" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.087008 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/714902bb-2bf7-4ba3-a7c7-3e1a712d276b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.087048 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/714902bb-2bf7-4ba3-a7c7-3e1a712d276b-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.087059 4833 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/714902bb-2bf7-4ba3-a7c7-3e1a712d276b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.087068 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/714902bb-2bf7-4ba3-a7c7-3e1a712d276b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.087077 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgfs9\" (UniqueName: \"kubernetes.io/projected/714902bb-2bf7-4ba3-a7c7-3e1a712d276b-kube-api-access-tgfs9\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.187882 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b6f9553-525d-4a8c-b420-067a9c5ec4a6-client-ca\") pod \"5b6f9553-525d-4a8c-b420-067a9c5ec4a6\" (UID: \"5b6f9553-525d-4a8c-b420-067a9c5ec4a6\") " Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.187989 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b6f9553-525d-4a8c-b420-067a9c5ec4a6-serving-cert\") pod \"5b6f9553-525d-4a8c-b420-067a9c5ec4a6\" (UID: \"5b6f9553-525d-4a8c-b420-067a9c5ec4a6\") " Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.188073 
4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7rrj\" (UniqueName: \"kubernetes.io/projected/5b6f9553-525d-4a8c-b420-067a9c5ec4a6-kube-api-access-n7rrj\") pod \"5b6f9553-525d-4a8c-b420-067a9c5ec4a6\" (UID: \"5b6f9553-525d-4a8c-b420-067a9c5ec4a6\") " Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.188142 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b6f9553-525d-4a8c-b420-067a9c5ec4a6-config\") pod \"5b6f9553-525d-4a8c-b420-067a9c5ec4a6\" (UID: \"5b6f9553-525d-4a8c-b420-067a9c5ec4a6\") " Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.188329 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f14784da-0c64-4486-a5d3-30021c547a00-serving-cert\") pod \"controller-manager-86c7df544-rrpmh\" (UID: \"f14784da-0c64-4486-a5d3-30021c547a00\") " pod="openshift-controller-manager/controller-manager-86c7df544-rrpmh" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.188383 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f14784da-0c64-4486-a5d3-30021c547a00-client-ca\") pod \"controller-manager-86c7df544-rrpmh\" (UID: \"f14784da-0c64-4486-a5d3-30021c547a00\") " pod="openshift-controller-manager/controller-manager-86c7df544-rrpmh" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.188424 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b6f9553-525d-4a8c-b420-067a9c5ec4a6-client-ca" (OuterVolumeSpecName: "client-ca") pod "5b6f9553-525d-4a8c-b420-067a9c5ec4a6" (UID: "5b6f9553-525d-4a8c-b420-067a9c5ec4a6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.188478 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f14784da-0c64-4486-a5d3-30021c547a00-proxy-ca-bundles\") pod \"controller-manager-86c7df544-rrpmh\" (UID: \"f14784da-0c64-4486-a5d3-30021c547a00\") " pod="openshift-controller-manager/controller-manager-86c7df544-rrpmh" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.188526 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f14784da-0c64-4486-a5d3-30021c547a00-config\") pod \"controller-manager-86c7df544-rrpmh\" (UID: \"f14784da-0c64-4486-a5d3-30021c547a00\") " pod="openshift-controller-manager/controller-manager-86c7df544-rrpmh" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.188555 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.188556 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j925f\" (UniqueName: \"kubernetes.io/projected/f14784da-0c64-4486-a5d3-30021c547a00-kube-api-access-j925f\") pod \"controller-manager-86c7df544-rrpmh\" (UID: \"f14784da-0c64-4486-a5d3-30021c547a00\") " pod="openshift-controller-manager/controller-manager-86c7df544-rrpmh" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.188717 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b6f9553-525d-4a8c-b420-067a9c5ec4a6-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.188863 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b6f9553-525d-4a8c-b420-067a9c5ec4a6-config" (OuterVolumeSpecName: "config") pod "5b6f9553-525d-4a8c-b420-067a9c5ec4a6" (UID: "5b6f9553-525d-4a8c-b420-067a9c5ec4a6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.191779 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b6f9553-525d-4a8c-b420-067a9c5ec4a6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5b6f9553-525d-4a8c-b420-067a9c5ec4a6" (UID: "5b6f9553-525d-4a8c-b420-067a9c5ec4a6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.191952 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b6f9553-525d-4a8c-b420-067a9c5ec4a6-kube-api-access-n7rrj" (OuterVolumeSpecName: "kube-api-access-n7rrj") pod "5b6f9553-525d-4a8c-b420-067a9c5ec4a6" (UID: "5b6f9553-525d-4a8c-b420-067a9c5ec4a6"). InnerVolumeSpecName "kube-api-access-n7rrj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.292219 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f14784da-0c64-4486-a5d3-30021c547a00-client-ca\") pod \"controller-manager-86c7df544-rrpmh\" (UID: \"f14784da-0c64-4486-a5d3-30021c547a00\") " pod="openshift-controller-manager/controller-manager-86c7df544-rrpmh" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.292299 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f14784da-0c64-4486-a5d3-30021c547a00-proxy-ca-bundles\") pod \"controller-manager-86c7df544-rrpmh\" (UID: \"f14784da-0c64-4486-a5d3-30021c547a00\") " pod="openshift-controller-manager/controller-manager-86c7df544-rrpmh" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.292349 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f14784da-0c64-4486-a5d3-30021c547a00-config\") pod \"controller-manager-86c7df544-rrpmh\" (UID: \"f14784da-0c64-4486-a5d3-30021c547a00\") " pod="openshift-controller-manager/controller-manager-86c7df544-rrpmh" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.292383 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j925f\" (UniqueName: \"kubernetes.io/projected/f14784da-0c64-4486-a5d3-30021c547a00-kube-api-access-j925f\") pod \"controller-manager-86c7df544-rrpmh\" (UID: \"f14784da-0c64-4486-a5d3-30021c547a00\") " pod="openshift-controller-manager/controller-manager-86c7df544-rrpmh" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.292434 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f14784da-0c64-4486-a5d3-30021c547a00-serving-cert\") pod \"controller-manager-86c7df544-rrpmh\" (UID: \"f14784da-0c64-4486-a5d3-30021c547a00\") " pod="openshift-controller-manager/controller-manager-86c7df544-rrpmh" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.292521 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7rrj\" (UniqueName: \"kubernetes.io/projected/5b6f9553-525d-4a8c-b420-067a9c5ec4a6-kube-api-access-n7rrj\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.292540 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b6f9553-525d-4a8c-b420-067a9c5ec4a6-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.292553 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b6f9553-525d-4a8c-b420-067a9c5ec4a6-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.293752 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f14784da-0c64-4486-a5d3-30021c547a00-client-ca\") pod \"controller-manager-86c7df544-rrpmh\" (UID: \"f14784da-0c64-4486-a5d3-30021c547a00\") " pod="openshift-controller-manager/controller-manager-86c7df544-rrpmh" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.293860 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f14784da-0c64-4486-a5d3-30021c547a00-config\") pod \"controller-manager-86c7df544-rrpmh\" (UID: \"f14784da-0c64-4486-a5d3-30021c547a00\") " pod="openshift-controller-manager/controller-manager-86c7df544-rrpmh" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.293872 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f14784da-0c64-4486-a5d3-30021c547a00-proxy-ca-bundles\") pod \"controller-manager-86c7df544-rrpmh\" (UID: \"f14784da-0c64-4486-a5d3-30021c547a00\") " pod="openshift-controller-manager/controller-manager-86c7df544-rrpmh" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.295585 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f14784da-0c64-4486-a5d3-30021c547a00-serving-cert\") pod \"controller-manager-86c7df544-rrpmh\" (UID: \"f14784da-0c64-4486-a5d3-30021c547a00\") " pod="openshift-controller-manager/controller-manager-86c7df544-rrpmh" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.317292 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j925f\" (UniqueName: \"kubernetes.io/projected/f14784da-0c64-4486-a5d3-30021c547a00-kube-api-access-j925f\") pod \"controller-manager-86c7df544-rrpmh\" (UID: \"f14784da-0c64-4486-a5d3-30021c547a00\") " pod="openshift-controller-manager/controller-manager-86c7df544-rrpmh" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.341213 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86c7df544-rrpmh" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.513345 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86c7df544-rrpmh"] Feb 19 12:49:37 crc kubenswrapper[4833]: W0219 12:49:37.520374 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf14784da_0c64_4486_a5d3_30021c547a00.slice/crio-6850da98eb5a04f255cdb19c19566ec15572c224f038507c83704dadd6304707 WatchSource:0}: Error finding container 6850da98eb5a04f255cdb19c19566ec15572c224f038507c83704dadd6304707: Status 404 returned error can't find the container with id 6850da98eb5a04f255cdb19c19566ec15572c224f038507c83704dadd6304707 Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.627688 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.654591 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"39e1c272-3d4f-4432-b459-14bed408fac9","Type":"ContainerStarted","Data":"f385f181e9113a31a8f99581aca2a48dab7cb8388b54b3d02b37e4168fa5e4fa"} Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.656222 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86c7df544-rrpmh" event={"ID":"f14784da-0c64-4486-a5d3-30021c547a00","Type":"ContainerStarted","Data":"3fc4802409beed4666e63c289ff1bb90f6326ae3fd4168ea9fb3ddcd6e064b60"} Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.656245 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86c7df544-rrpmh" 
event={"ID":"f14784da-0c64-4486-a5d3-30021c547a00","Type":"ContainerStarted","Data":"6850da98eb5a04f255cdb19c19566ec15572c224f038507c83704dadd6304707"} Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.657226 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-86c7df544-rrpmh" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.660790 4833 generic.go:334] "Generic (PLEG): container finished" podID="1011b353-4bd1-4087-b510-22d34e72e48b" containerID="4bb34b83f731e629aa16d5f339abd5b3033e7855c76ce90fbfc1767a6895270f" exitCode=0 Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.660836 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z7wn9" event={"ID":"1011b353-4bd1-4087-b510-22d34e72e48b","Type":"ContainerDied","Data":"4bb34b83f731e629aa16d5f339abd5b3033e7855c76ce90fbfc1767a6895270f"} Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.660854 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z7wn9" event={"ID":"1011b353-4bd1-4087-b510-22d34e72e48b","Type":"ContainerStarted","Data":"7a97ba92fa433c65e4027908a37bce3478be5f99c029477d2e8ea64f942700a3"} Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.663992 4833 patch_prober.go:28] interesting pod/controller-manager-86c7df544-rrpmh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.664024 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-86c7df544-rrpmh" podUID="f14784da-0c64-4486-a5d3-30021c547a00" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.666248 4833 generic.go:334] "Generic (PLEG): container finished" podID="e07089cf-96f1-4054-89e1-19ab49960371" containerID="b4465d5c7c6dbe5110f420c1061f0b6890fd947e61edbca4b416c402c3cb8c09" exitCode=0 Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.666297 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g62n8" event={"ID":"e07089cf-96f1-4054-89e1-19ab49960371","Type":"ContainerDied","Data":"b4465d5c7c6dbe5110f420c1061f0b6890fd947e61edbca4b416c402c3cb8c09"} Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.666320 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g62n8" event={"ID":"e07089cf-96f1-4054-89e1-19ab49960371","Type":"ContainerStarted","Data":"175687bad24c9c8700b9ca3465c227b541a1bae9dbe1e3f143fcd1ae3a492b58"} Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.668896 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nnkn" event={"ID":"906174c8-210a-4ee1-b18f-76dc4076ed5e","Type":"ContainerStarted","Data":"787f85ba13feeb113a85583cb50c9ed039d9d1362efd4ae935b57f2313d81938"} Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.671752 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-86c7df544-rrpmh" podStartSLOduration=3.671741594 podStartE2EDuration="3.671741594s" podCreationTimestamp="2026-02-19 12:49:34 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:49:37.669747394 +0000 UTC m=+188.065266162" watchObservedRunningTime="2026-02-19 12:49:37.671741594 +0000 UTC m=+188.067260362" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.678523 4833 generic.go:334] "Generic (PLEG): container finished" podID="714902bb-2bf7-4ba3-a7c7-3e1a712d276b" containerID="20fc026eaa71ebc14c93d280dc9cc8dff6ec143ede475ac62fc02ca1a396e683" exitCode=0 Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.678563 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-794c77566f-86c7l" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.678584 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-794c77566f-86c7l" event={"ID":"714902bb-2bf7-4ba3-a7c7-3e1a712d276b","Type":"ContainerDied","Data":"20fc026eaa71ebc14c93d280dc9cc8dff6ec143ede475ac62fc02ca1a396e683"} Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.678606 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-794c77566f-86c7l" event={"ID":"714902bb-2bf7-4ba3-a7c7-3e1a712d276b","Type":"ContainerDied","Data":"a0ed088746996a3ea5d47389e66de66456f5e0c478cbc8502f312852d0dd0686"} Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.678623 4833 scope.go:117] "RemoveContainer" containerID="20fc026eaa71ebc14c93d280dc9cc8dff6ec143ede475ac62fc02ca1a396e683" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.686362 4833 generic.go:334] "Generic (PLEG): container finished" podID="5b6f9553-525d-4a8c-b420-067a9c5ec4a6" containerID="4ab568c910e8fb364981014f065d1c594445825e1ab7a5f0b3bf3d0648763073" exitCode=0 Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.686836 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-gr5gn" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.686882 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-gr5gn" event={"ID":"5b6f9553-525d-4a8c-b420-067a9c5ec4a6","Type":"ContainerDied","Data":"4ab568c910e8fb364981014f065d1c594445825e1ab7a5f0b3bf3d0648763073"} Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.686905 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56bfc9748-gr5gn" event={"ID":"5b6f9553-525d-4a8c-b420-067a9c5ec4a6","Type":"ContainerDied","Data":"f1577f81dd9eb5fbd278065fe6d4361a81bda1e48db4ecdf9f743e2abe5897d8"} Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.700192 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z7wn9" podStartSLOduration=2.967571098 podStartE2EDuration="36.700173768s" podCreationTimestamp="2026-02-19 12:49:01 +0000 UTC" firstStartedPulling="2026-02-19 12:49:03.310473396 +0000 UTC m=+153.705992164" lastFinishedPulling="2026-02-19 12:49:37.043076066 +0000 UTC m=+187.438594834" observedRunningTime="2026-02-19 12:49:37.698708312 +0000 UTC m=+188.094227070" watchObservedRunningTime="2026-02-19 12:49:37.700173768 +0000 UTC m=+188.095692526" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.702407 4833 scope.go:117] "RemoveContainer" containerID="20fc026eaa71ebc14c93d280dc9cc8dff6ec143ede475ac62fc02ca1a396e683" Feb 19 12:49:37 crc kubenswrapper[4833]: E0219 12:49:37.703353 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20fc026eaa71ebc14c93d280dc9cc8dff6ec143ede475ac62fc02ca1a396e683\": container with ID starting with 20fc026eaa71ebc14c93d280dc9cc8dff6ec143ede475ac62fc02ca1a396e683 not found: ID does not exist" containerID="20fc026eaa71ebc14c93d280dc9cc8dff6ec143ede475ac62fc02ca1a396e683" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.703417 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20fc026eaa71ebc14c93d280dc9cc8dff6ec143ede475ac62fc02ca1a396e683"} err="failed to get container status \"20fc026eaa71ebc14c93d280dc9cc8dff6ec143ede475ac62fc02ca1a396e683\": rpc error: code = NotFound desc = could not find container \"20fc026eaa71ebc14c93d280dc9cc8dff6ec143ede475ac62fc02ca1a396e683\": container with ID starting with 20fc026eaa71ebc14c93d280dc9cc8dff6ec143ede475ac62fc02ca1a396e683 not found: ID does not exist" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.703484 4833 scope.go:117] "RemoveContainer" containerID="4ab568c910e8fb364981014f065d1c594445825e1ab7a5f0b3bf3d0648763073" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.727296 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g62n8" podStartSLOduration=2.915148511 podStartE2EDuration="36.727275209s" podCreationTimestamp="2026-02-19 12:49:01 +0000 UTC" firstStartedPulling="2026-02-19 12:49:03.317749449 +0000 UTC m=+153.713268217" lastFinishedPulling="2026-02-19 12:49:37.129876147 +0000 UTC m=+187.525394915" observedRunningTime="2026-02-19 12:49:37.727183077 +0000 UTC m=+188.122701845" watchObservedRunningTime="2026-02-19 12:49:37.727275209 +0000 UTC m=+188.122793977" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.728277 4833 
scope.go:117] "RemoveContainer" containerID="4ab568c910e8fb364981014f065d1c594445825e1ab7a5f0b3bf3d0648763073" Feb 19 12:49:37 crc kubenswrapper[4833]: E0219 12:49:37.730813 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ab568c910e8fb364981014f065d1c594445825e1ab7a5f0b3bf3d0648763073\": container with ID starting with 4ab568c910e8fb364981014f065d1c594445825e1ab7a5f0b3bf3d0648763073 not found: ID does not exist" containerID="4ab568c910e8fb364981014f065d1c594445825e1ab7a5f0b3bf3d0648763073" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.730846 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ab568c910e8fb364981014f065d1c594445825e1ab7a5f0b3bf3d0648763073"} err="failed to get container status \"4ab568c910e8fb364981014f065d1c594445825e1ab7a5f0b3bf3d0648763073\": rpc error: code = NotFound desc = could not find container \"4ab568c910e8fb364981014f065d1c594445825e1ab7a5f0b3bf3d0648763073\": container with ID starting with 4ab568c910e8fb364981014f065d1c594445825e1ab7a5f0b3bf3d0648763073 not found: ID does not exist" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.746439 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4nnkn" podStartSLOduration=2.80073439 podStartE2EDuration="39.746421641s" podCreationTimestamp="2026-02-19 12:48:58 +0000 UTC" firstStartedPulling="2026-02-19 12:49:00.164423729 +0000 UTC m=+150.559942497" lastFinishedPulling="2026-02-19 12:49:37.11011098 +0000 UTC m=+187.505629748" observedRunningTime="2026-02-19 12:49:37.743682622 +0000 UTC m=+188.139201400" watchObservedRunningTime="2026-02-19 12:49:37.746421641 +0000 UTC m=+188.141940409" Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.761001 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56bfc9748-gr5gn"] Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.764244 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56bfc9748-gr5gn"] Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.775083 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-794c77566f-86c7l"] Feb 19 12:49:37 crc kubenswrapper[4833]: I0219 12:49:37.776283 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-794c77566f-86c7l"] Feb 19 12:49:38 crc kubenswrapper[4833]: I0219 12:49:38.321098 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b6f9553-525d-4a8c-b420-067a9c5ec4a6" path="/var/lib/kubelet/pods/5b6f9553-525d-4a8c-b420-067a9c5ec4a6/volumes" Feb 19 12:49:38 crc kubenswrapper[4833]: I0219 12:49:38.322029 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="714902bb-2bf7-4ba3-a7c7-3e1a712d276b" path="/var/lib/kubelet/pods/714902bb-2bf7-4ba3-a7c7-3e1a712d276b/volumes" Feb 19 12:49:38 crc kubenswrapper[4833]: I0219 12:49:38.568923 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4nnkn" Feb 19 12:49:38 crc kubenswrapper[4833]: I0219 12:49:38.568960 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4nnkn" Feb 19 12:49:38 crc kubenswrapper[4833]: I0219 12:49:38.573551 4833 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 19 12:49:38 crc kubenswrapper[4833]: I0219 12:49:38.693405 4833 generic.go:334] "Generic (PLEG): container finished" podID="39e1c272-3d4f-4432-b459-14bed408fac9" containerID="3a4e2b461f1623f0b35061e53a3be60c45869bad758554babcb1f7e9be777c8f" exitCode=0 Feb 19 12:49:38 crc kubenswrapper[4833]: I0219 12:49:38.693522 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"39e1c272-3d4f-4432-b459-14bed408fac9","Type":"ContainerDied","Data":"3a4e2b461f1623f0b35061e53a3be60c45869bad758554babcb1f7e9be777c8f"} Feb 19 12:49:38 crc kubenswrapper[4833]: I0219 12:49:38.699598 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-86c7df544-rrpmh" Feb 19 12:49:39 crc kubenswrapper[4833]: I0219 12:49:39.724793 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-4nnkn" podUID="906174c8-210a-4ee1-b18f-76dc4076ed5e" containerName="registry-server" probeResult="failure" output=< Feb 19 12:49:39 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s Feb 19 12:49:39 crc kubenswrapper[4833]: > Feb 19 12:49:39 crc kubenswrapper[4833]: I0219 12:49:39.772043 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-696dd49c7f-nhgzx"] Feb 19 12:49:39 crc kubenswrapper[4833]: E0219 12:49:39.772558 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b6f9553-525d-4a8c-b420-067a9c5ec4a6" containerName="route-controller-manager" Feb 19 12:49:39 crc kubenswrapper[4833]: I0219 12:49:39.772573 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b6f9553-525d-4a8c-b420-067a9c5ec4a6" containerName="route-controller-manager" Feb 19 12:49:39 crc kubenswrapper[4833]: I0219 12:49:39.772676 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b6f9553-525d-4a8c-b420-067a9c5ec4a6" containerName="route-controller-manager" Feb 19 12:49:39 crc kubenswrapper[4833]: I0219 12:49:39.773018 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-696dd49c7f-nhgzx" Feb 19 12:49:39 crc kubenswrapper[4833]: I0219 12:49:39.774658 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 12:49:39 crc kubenswrapper[4833]: I0219 12:49:39.774892 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 12:49:39 crc kubenswrapper[4833]: I0219 12:49:39.775629 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 12:49:39 crc kubenswrapper[4833]: I0219 12:49:39.775883 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 12:49:39 crc kubenswrapper[4833]: I0219 12:49:39.776020 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 12:49:39 crc kubenswrapper[4833]: I0219 12:49:39.776141 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 12:49:39 crc kubenswrapper[4833]: I0219 12:49:39.783874 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-696dd49c7f-nhgzx"] Feb 19 12:49:39 crc kubenswrapper[4833]: I0219 12:49:39.831441 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4639d90-f7ce-47ac-b302-f39fd7ab635e-serving-cert\") pod \"route-controller-manager-696dd49c7f-nhgzx\" (UID: \"d4639d90-f7ce-47ac-b302-f39fd7ab635e\") " pod="openshift-route-controller-manager/route-controller-manager-696dd49c7f-nhgzx" Feb 19 12:49:39 crc kubenswrapper[4833]: I0219 12:49:39.831513 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wsxg\" (UniqueName: \"kubernetes.io/projected/d4639d90-f7ce-47ac-b302-f39fd7ab635e-kube-api-access-9wsxg\") pod \"route-controller-manager-696dd49c7f-nhgzx\" (UID: \"d4639d90-f7ce-47ac-b302-f39fd7ab635e\") " pod="openshift-route-controller-manager/route-controller-manager-696dd49c7f-nhgzx" Feb 19 12:49:39 crc kubenswrapper[4833]: I0219 12:49:39.831609 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4639d90-f7ce-47ac-b302-f39fd7ab635e-config\") pod \"route-controller-manager-696dd49c7f-nhgzx\" (UID: \"d4639d90-f7ce-47ac-b302-f39fd7ab635e\") " pod="openshift-route-controller-manager/route-controller-manager-696dd49c7f-nhgzx" Feb 19 12:49:39 crc kubenswrapper[4833]: I0219 12:49:39.831725 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4639d90-f7ce-47ac-b302-f39fd7ab635e-client-ca\") pod \"route-controller-manager-696dd49c7f-nhgzx\" (UID: \"d4639d90-f7ce-47ac-b302-f39fd7ab635e\") " pod="openshift-route-controller-manager/route-controller-manager-696dd49c7f-nhgzx" Feb 19 12:49:39 crc kubenswrapper[4833]: I0219 12:49:39.932465 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4639d90-f7ce-47ac-b302-f39fd7ab635e-config\") pod 
\"route-controller-manager-696dd49c7f-nhgzx\" (UID: \"d4639d90-f7ce-47ac-b302-f39fd7ab635e\") " pod="openshift-route-controller-manager/route-controller-manager-696dd49c7f-nhgzx" Feb 19 12:49:39 crc kubenswrapper[4833]: I0219 12:49:39.932577 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4639d90-f7ce-47ac-b302-f39fd7ab635e-client-ca\") pod \"route-controller-manager-696dd49c7f-nhgzx\" (UID: \"d4639d90-f7ce-47ac-b302-f39fd7ab635e\") " pod="openshift-route-controller-manager/route-controller-manager-696dd49c7f-nhgzx" Feb 19 12:49:39 crc kubenswrapper[4833]: I0219 12:49:39.932633 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4639d90-f7ce-47ac-b302-f39fd7ab635e-serving-cert\") pod \"route-controller-manager-696dd49c7f-nhgzx\" (UID: \"d4639d90-f7ce-47ac-b302-f39fd7ab635e\") " pod="openshift-route-controller-manager/route-controller-manager-696dd49c7f-nhgzx" Feb 19 12:49:39 crc kubenswrapper[4833]: I0219 12:49:39.932661 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wsxg\" (UniqueName: \"kubernetes.io/projected/d4639d90-f7ce-47ac-b302-f39fd7ab635e-kube-api-access-9wsxg\") pod \"route-controller-manager-696dd49c7f-nhgzx\" (UID: \"d4639d90-f7ce-47ac-b302-f39fd7ab635e\") " pod="openshift-route-controller-manager/route-controller-manager-696dd49c7f-nhgzx" Feb 19 12:49:39 crc kubenswrapper[4833]: I0219 12:49:39.933726 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4639d90-f7ce-47ac-b302-f39fd7ab635e-client-ca\") pod \"route-controller-manager-696dd49c7f-nhgzx\" (UID: \"d4639d90-f7ce-47ac-b302-f39fd7ab635e\") " pod="openshift-route-controller-manager/route-controller-manager-696dd49c7f-nhgzx" Feb 19 12:49:39 crc kubenswrapper[4833]: I0219 12:49:39.934063 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4639d90-f7ce-47ac-b302-f39fd7ab635e-config\") pod \"route-controller-manager-696dd49c7f-nhgzx\" (UID: \"d4639d90-f7ce-47ac-b302-f39fd7ab635e\") " pod="openshift-route-controller-manager/route-controller-manager-696dd49c7f-nhgzx" Feb 19 12:49:39 crc kubenswrapper[4833]: I0219 12:49:39.941564 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4639d90-f7ce-47ac-b302-f39fd7ab635e-serving-cert\") pod \"route-controller-manager-696dd49c7f-nhgzx\" (UID: \"d4639d90-f7ce-47ac-b302-f39fd7ab635e\") " pod="openshift-route-controller-manager/route-controller-manager-696dd49c7f-nhgzx" Feb 19 12:49:39 crc kubenswrapper[4833]: I0219 12:49:39.955335 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wsxg\" (UniqueName: \"kubernetes.io/projected/d4639d90-f7ce-47ac-b302-f39fd7ab635e-kube-api-access-9wsxg\") pod \"route-controller-manager-696dd49c7f-nhgzx\" (UID: \"d4639d90-f7ce-47ac-b302-f39fd7ab635e\") " pod="openshift-route-controller-manager/route-controller-manager-696dd49c7f-nhgzx" Feb 19 12:49:39 crc kubenswrapper[4833]: I0219 12:49:39.981086 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 12:49:40 crc kubenswrapper[4833]: I0219 12:49:40.033470 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/39e1c272-3d4f-4432-b459-14bed408fac9-kubelet-dir\") pod \"39e1c272-3d4f-4432-b459-14bed408fac9\" (UID: \"39e1c272-3d4f-4432-b459-14bed408fac9\") " Feb 19 12:49:40 crc kubenswrapper[4833]: I0219 12:49:40.033590 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/39e1c272-3d4f-4432-b459-14bed408fac9-kube-api-access\") pod \"39e1c272-3d4f-4432-b459-14bed408fac9\" (UID: \"39e1c272-3d4f-4432-b459-14bed408fac9\") " Feb 19 12:49:40 crc kubenswrapper[4833]: I0219 12:49:40.033755 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39e1c272-3d4f-4432-b459-14bed408fac9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "39e1c272-3d4f-4432-b459-14bed408fac9" (UID: "39e1c272-3d4f-4432-b459-14bed408fac9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 12:49:40 crc kubenswrapper[4833]: I0219 12:49:40.033906 4833 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/39e1c272-3d4f-4432-b459-14bed408fac9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:40 crc kubenswrapper[4833]: I0219 12:49:40.038202 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39e1c272-3d4f-4432-b459-14bed408fac9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "39e1c272-3d4f-4432-b459-14bed408fac9" (UID: "39e1c272-3d4f-4432-b459-14bed408fac9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:49:40 crc kubenswrapper[4833]: I0219 12:49:40.067686 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-66dsh"] Feb 19 12:49:40 crc kubenswrapper[4833]: I0219 12:49:40.094761 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-696dd49c7f-nhgzx" Feb 19 12:49:40 crc kubenswrapper[4833]: I0219 12:49:40.136221 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/39e1c272-3d4f-4432-b459-14bed408fac9-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:40 crc kubenswrapper[4833]: I0219 12:49:40.514370 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-696dd49c7f-nhgzx"] Feb 19 12:49:40 crc kubenswrapper[4833]: W0219 12:49:40.526879 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4639d90_f7ce_47ac_b302_f39fd7ab635e.slice/crio-0720107a74b9bc4c3ced749c2c1f2a95b38d1f60a3e1c9eb5db33dc6ac739410 WatchSource:0}: Error finding container 0720107a74b9bc4c3ced749c2c1f2a95b38d1f60a3e1c9eb5db33dc6ac739410: Status 404 returned error can't find the container with id 0720107a74b9bc4c3ced749c2c1f2a95b38d1f60a3e1c9eb5db33dc6ac739410 Feb 19 12:49:40 crc kubenswrapper[4833]: I0219 12:49:40.706711 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 19 12:49:40 crc kubenswrapper[4833]: I0219 12:49:40.706699 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"39e1c272-3d4f-4432-b459-14bed408fac9","Type":"ContainerDied","Data":"f385f181e9113a31a8f99581aca2a48dab7cb8388b54b3d02b37e4168fa5e4fa"} Feb 19 12:49:40 crc kubenswrapper[4833]: I0219 12:49:40.706853 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f385f181e9113a31a8f99581aca2a48dab7cb8388b54b3d02b37e4168fa5e4fa" Feb 19 12:49:40 crc kubenswrapper[4833]: I0219 12:49:40.708673 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-696dd49c7f-nhgzx" event={"ID":"d4639d90-f7ce-47ac-b302-f39fd7ab635e","Type":"ContainerStarted","Data":"fe67cdcbda88863a04b2f801d953a28100bb549e433151e3b55e07720aac32df"} Feb 19 12:49:40 crc kubenswrapper[4833]: I0219 12:49:40.708753 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-696dd49c7f-nhgzx" event={"ID":"d4639d90-f7ce-47ac-b302-f39fd7ab635e","Type":"ContainerStarted","Data":"0720107a74b9bc4c3ced749c2c1f2a95b38d1f60a3e1c9eb5db33dc6ac739410"} Feb 19 12:49:40 crc kubenswrapper[4833]: I0219 12:49:40.708907 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-696dd49c7f-nhgzx" Feb 19 12:49:40 crc kubenswrapper[4833]: I0219 12:49:40.723312 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-696dd49c7f-nhgzx" podStartSLOduration=6.7232965369999995 podStartE2EDuration="6.723296537s" podCreationTimestamp="2026-02-19 12:49:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:49:40.720993199 +0000 UTC m=+191.116511957" watchObservedRunningTime="2026-02-19 12:49:40.723296537 +0000 UTC m=+191.118815305" Feb 19 12:49:41 crc kubenswrapper[4833]: I0219 12:49:41.041845 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-696dd49c7f-nhgzx" Feb 19 12:49:41 crc kubenswrapper[4833]: I0219 12:49:41.379698 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z7wn9" Feb 19 12:49:41 crc kubenswrapper[4833]: I0219 12:49:41.379740 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z7wn9" Feb 19 12:49:41 crc kubenswrapper[4833]: I0219 12:49:41.851349 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g62n8" Feb 19 12:49:41 crc kubenswrapper[4833]: I0219 12:49:41.851388 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g62n8" Feb 19 12:49:42 crc kubenswrapper[4833]: I0219 12:49:42.415342 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z7wn9" podUID="1011b353-4bd1-4087-b510-22d34e72e48b" containerName="registry-server" probeResult="failure" output=< Feb 19 12:49:42 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s Feb 19 12:49:42 crc kubenswrapper[4833]: > Feb 19 12:49:42 crc 
kubenswrapper[4833]: I0219 12:49:42.908754 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g62n8" podUID="e07089cf-96f1-4054-89e1-19ab49960371" containerName="registry-server" probeResult="failure" output=< Feb 19 12:49:42 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s Feb 19 12:49:42 crc kubenswrapper[4833]: > Feb 19 12:49:43 crc kubenswrapper[4833]: I0219 12:49:43.355701 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 12:49:43 crc kubenswrapper[4833]: E0219 12:49:43.355982 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39e1c272-3d4f-4432-b459-14bed408fac9" containerName="pruner" Feb 19 12:49:43 crc kubenswrapper[4833]: I0219 12:49:43.356005 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="39e1c272-3d4f-4432-b459-14bed408fac9" containerName="pruner" Feb 19 12:49:43 crc kubenswrapper[4833]: I0219 12:49:43.356119 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="39e1c272-3d4f-4432-b459-14bed408fac9" containerName="pruner" Feb 19 12:49:43 crc kubenswrapper[4833]: I0219 12:49:43.356588 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 12:49:43 crc kubenswrapper[4833]: I0219 12:49:43.359426 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 19 12:49:43 crc kubenswrapper[4833]: I0219 12:49:43.359619 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 19 12:49:43 crc kubenswrapper[4833]: I0219 12:49:43.369245 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 12:49:43 crc kubenswrapper[4833]: I0219 12:49:43.377158 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2582174b-9c9d-465e-9f88-e249c815e8a0-var-lock\") pod \"installer-9-crc\" (UID: \"2582174b-9c9d-465e-9f88-e249c815e8a0\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 12:49:43 crc kubenswrapper[4833]: I0219 12:49:43.377233 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2582174b-9c9d-465e-9f88-e249c815e8a0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2582174b-9c9d-465e-9f88-e249c815e8a0\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 12:49:43 crc kubenswrapper[4833]: I0219 12:49:43.377256 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2582174b-9c9d-465e-9f88-e249c815e8a0-kube-api-access\") pod \"installer-9-crc\" (UID: \"2582174b-9c9d-465e-9f88-e249c815e8a0\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 12:49:43 crc kubenswrapper[4833]: I0219 12:49:43.478239 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2582174b-9c9d-465e-9f88-e249c815e8a0-var-lock\") pod \"installer-9-crc\" (UID: \"2582174b-9c9d-465e-9f88-e249c815e8a0\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 12:49:43 crc kubenswrapper[4833]: I0219 12:49:43.478291 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/2582174b-9c9d-465e-9f88-e249c815e8a0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2582174b-9c9d-465e-9f88-e249c815e8a0\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 12:49:43 crc kubenswrapper[4833]: I0219 12:49:43.478306 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2582174b-9c9d-465e-9f88-e249c815e8a0-kube-api-access\") pod \"installer-9-crc\" (UID: \"2582174b-9c9d-465e-9f88-e249c815e8a0\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 12:49:43 crc kubenswrapper[4833]: I0219 12:49:43.478665 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2582174b-9c9d-465e-9f88-e249c815e8a0-var-lock\") pod \"installer-9-crc\" (UID: \"2582174b-9c9d-465e-9f88-e249c815e8a0\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 12:49:43 crc kubenswrapper[4833]: I0219 12:49:43.478696 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2582174b-9c9d-465e-9f88-e249c815e8a0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2582174b-9c9d-465e-9f88-e249c815e8a0\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 12:49:43 crc kubenswrapper[4833]: I0219 12:49:43.502046 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2582174b-9c9d-465e-9f88-e249c815e8a0-kube-api-access\") pod \"installer-9-crc\" (UID: \"2582174b-9c9d-465e-9f88-e249c815e8a0\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 19 12:49:43 crc kubenswrapper[4833]: I0219 12:49:43.673584 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 12:49:43 crc kubenswrapper[4833]: I0219 12:49:43.726145 4833 generic.go:334] "Generic (PLEG): container finished" podID="248c3a65-f82e-475e-9d61-502028f6c2cc" containerID="5c3b8d4f46414704d9b70f42828bfdad4c272bbc5a5f2953730ace986e91068c" exitCode=0 Feb 19 12:49:43 crc kubenswrapper[4833]: I0219 12:49:43.726250 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zfz65" event={"ID":"248c3a65-f82e-475e-9d61-502028f6c2cc","Type":"ContainerDied","Data":"5c3b8d4f46414704d9b70f42828bfdad4c272bbc5a5f2953730ace986e91068c"} Feb 19 12:49:44 crc kubenswrapper[4833]: I0219 12:49:44.070525 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 19 12:49:44 crc kubenswrapper[4833]: W0219 12:49:44.084916 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod2582174b_9c9d_465e_9f88_e249c815e8a0.slice/crio-2c2f5ce60abf7695bb5c0a797fb9f2c13c9083e2654240edd2a861e7cb833934 WatchSource:0}: Error finding container 2c2f5ce60abf7695bb5c0a797fb9f2c13c9083e2654240edd2a861e7cb833934: Status 404 returned error can't find the container with id 2c2f5ce60abf7695bb5c0a797fb9f2c13c9083e2654240edd2a861e7cb833934 Feb 19 12:49:44 crc kubenswrapper[4833]: I0219 12:49:44.735512 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zfz65" event={"ID":"248c3a65-f82e-475e-9d61-502028f6c2cc","Type":"ContainerStarted","Data":"ceeb2b513f06542a1952246ae8af59eadc3c6aa360913d8ac2426b1955819818"} Feb 19 12:49:44 crc kubenswrapper[4833]: I0219 12:49:44.736750 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2582174b-9c9d-465e-9f88-e249c815e8a0","Type":"ContainerStarted","Data":"0cf1232ab313ae99d4d8ab61b318baa4f6a01a638943d3aaed1642931dafc036"} Feb 19 12:49:44 crc kubenswrapper[4833]: I0219 12:49:44.736806 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2582174b-9c9d-465e-9f88-e249c815e8a0","Type":"ContainerStarted","Data":"2c2f5ce60abf7695bb5c0a797fb9f2c13c9083e2654240edd2a861e7cb833934"} Feb 19 12:49:44 crc kubenswrapper[4833]: I0219 12:49:44.755897 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zfz65" podStartSLOduration=2.598626141 podStartE2EDuration="47.755880463s" podCreationTimestamp="2026-02-19 12:48:57 +0000 UTC" firstStartedPulling="2026-02-19 12:48:59.123669095 +0000 UTC m=+149.519187863" lastFinishedPulling="2026-02-19 12:49:44.280923417 +0000 UTC m=+194.676442185" observedRunningTime="2026-02-19 12:49:44.750240701 +0000 UTC m=+195.145759479" watchObservedRunningTime="2026-02-19 12:49:44.755880463 +0000 UTC m=+195.151399241" Feb 19 12:49:44 crc kubenswrapper[4833]: I0219 12:49:44.766337 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.766314195 podStartE2EDuration="1.766314195s" podCreationTimestamp="2026-02-19 12:49:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:49:44.763292279 +0000 UTC m=+195.158811057" watchObservedRunningTime="2026-02-19 12:49:44.766314195 +0000 UTC m=+195.161832973" Feb 19 12:49:45 crc kubenswrapper[4833]: I0219 12:49:45.744331 4833 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jkz78" event={"ID":"3a9e548c-9edf-4dc6-83a7-4f07f6960721","Type":"ContainerStarted","Data":"68dfde320717f266c58d904a10e5d429239f7e76a464c1b367eae8a817ad0316"} Feb 19 12:49:45 crc kubenswrapper[4833]: I0219 12:49:45.744801 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 12:49:45 crc kubenswrapper[4833]: I0219 12:49:45.744848 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 12:49:46 crc kubenswrapper[4833]: I0219 12:49:46.750314 4833 generic.go:334] "Generic (PLEG): container finished" podID="3a9e548c-9edf-4dc6-83a7-4f07f6960721" containerID="68dfde320717f266c58d904a10e5d429239f7e76a464c1b367eae8a817ad0316" exitCode=0 Feb 19 12:49:46 crc kubenswrapper[4833]: I0219 12:49:46.750648 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jkz78" event={"ID":"3a9e548c-9edf-4dc6-83a7-4f07f6960721","Type":"ContainerDied","Data":"68dfde320717f266c58d904a10e5d429239f7e76a464c1b367eae8a817ad0316"} Feb 19 12:49:47 crc kubenswrapper[4833]: I0219 12:49:47.758876 4833 generic.go:334] "Generic (PLEG): container finished" podID="b5adb7ca-e392-4fff-aad0-078c4b6de62e" containerID="2388400abc4c05575b6e31174e57c854900f1d52d40f90d484735f0da692527d" exitCode=0 Feb 19 12:49:47 crc kubenswrapper[4833]: I0219 12:49:47.758965 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ssllf" event={"ID":"b5adb7ca-e392-4fff-aad0-078c4b6de62e","Type":"ContainerDied","Data":"2388400abc4c05575b6e31174e57c854900f1d52d40f90d484735f0da692527d"} Feb 19 12:49:47 crc kubenswrapper[4833]: I0219 12:49:47.762314 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jkz78" event={"ID":"3a9e548c-9edf-4dc6-83a7-4f07f6960721","Type":"ContainerStarted","Data":"5416347737fb82cb710e02ab246acafbe88170e54331bb37f4de5ad2aaa68356"} Feb 19 12:49:47 crc kubenswrapper[4833]: I0219 12:49:47.973020 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zfz65" Feb 19 12:49:47 crc kubenswrapper[4833]: I0219 12:49:47.973622 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zfz65" Feb 19 12:49:48 crc kubenswrapper[4833]: I0219 12:49:48.035530 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zfz65" Feb 19 12:49:48 crc kubenswrapper[4833]: I0219 12:49:48.056331 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jkz78" podStartSLOduration=2.966303488 podStartE2EDuration="51.056316084s" podCreationTimestamp="2026-02-19 12:48:57 +0000 UTC" firstStartedPulling="2026-02-19 12:48:59.115675574 +0000 UTC m=+149.511194492" lastFinishedPulling="2026-02-19 12:49:47.20568832 +0000 UTC m=+197.601207088" observedRunningTime="2026-02-19 
12:49:47.793527273 +0000 UTC m=+198.189046051" watchObservedRunningTime="2026-02-19 12:49:48.056316084 +0000 UTC m=+198.451834852" Feb 19 12:49:48 crc kubenswrapper[4833]: I0219 12:49:48.169559 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jkz78" Feb 19 12:49:48 crc kubenswrapper[4833]: I0219 12:49:48.169587 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jkz78" Feb 19 12:49:48 crc kubenswrapper[4833]: I0219 12:49:48.610616 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4nnkn" Feb 19 12:49:48 crc kubenswrapper[4833]: I0219 12:49:48.655792 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4nnkn" Feb 19 12:49:49 crc kubenswrapper[4833]: I0219 12:49:49.208071 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-jkz78" podUID="3a9e548c-9edf-4dc6-83a7-4f07f6960721" containerName="registry-server" probeResult="failure" output=< Feb 19 12:49:49 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s Feb 19 12:49:49 crc kubenswrapper[4833]: > Feb 19 12:49:49 crc kubenswrapper[4833]: I0219 12:49:49.751630 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4nnkn"] Feb 19 12:49:49 crc kubenswrapper[4833]: I0219 12:49:49.773442 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4nnkn" podUID="906174c8-210a-4ee1-b18f-76dc4076ed5e" containerName="registry-server" containerID="cri-o://787f85ba13feeb113a85583cb50c9ed039d9d1362efd4ae935b57f2313d81938" gracePeriod=2 Feb 19 12:49:49 crc kubenswrapper[4833]: I0219 12:49:49.809478 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zfz65" Feb 19 12:49:50 crc kubenswrapper[4833]: I0219 12:49:50.784315 4833 generic.go:334] "Generic (PLEG): container finished" podID="906174c8-210a-4ee1-b18f-76dc4076ed5e" containerID="787f85ba13feeb113a85583cb50c9ed039d9d1362efd4ae935b57f2313d81938" exitCode=0 Feb 19 12:49:50 crc kubenswrapper[4833]: I0219 12:49:50.784359 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nnkn" event={"ID":"906174c8-210a-4ee1-b18f-76dc4076ed5e","Type":"ContainerDied","Data":"787f85ba13feeb113a85583cb50c9ed039d9d1362efd4ae935b57f2313d81938"} Feb 19 12:49:51 crc kubenswrapper[4833]: I0219 12:49:51.246168 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4nnkn" Feb 19 12:49:51 crc kubenswrapper[4833]: I0219 12:49:51.271485 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/906174c8-210a-4ee1-b18f-76dc4076ed5e-utilities\") pod \"906174c8-210a-4ee1-b18f-76dc4076ed5e\" (UID: \"906174c8-210a-4ee1-b18f-76dc4076ed5e\") " Feb 19 12:49:51 crc kubenswrapper[4833]: I0219 12:49:51.271552 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqs2c\" (UniqueName: \"kubernetes.io/projected/906174c8-210a-4ee1-b18f-76dc4076ed5e-kube-api-access-sqs2c\") pod \"906174c8-210a-4ee1-b18f-76dc4076ed5e\" (UID: \"906174c8-210a-4ee1-b18f-76dc4076ed5e\") " Feb 19 12:49:51 crc kubenswrapper[4833]: I0219 12:49:51.271585 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/906174c8-210a-4ee1-b18f-76dc4076ed5e-catalog-content\") pod \"906174c8-210a-4ee1-b18f-76dc4076ed5e\" (UID: \"906174c8-210a-4ee1-b18f-76dc4076ed5e\") " Feb 19 12:49:51 crc kubenswrapper[4833]: I0219 12:49:51.272191 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/906174c8-210a-4ee1-b18f-76dc4076ed5e-utilities" (OuterVolumeSpecName: "utilities") pod "906174c8-210a-4ee1-b18f-76dc4076ed5e" (UID: "906174c8-210a-4ee1-b18f-76dc4076ed5e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 12:49:51 crc kubenswrapper[4833]: I0219 12:49:51.305343 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/906174c8-210a-4ee1-b18f-76dc4076ed5e-kube-api-access-sqs2c" (OuterVolumeSpecName: "kube-api-access-sqs2c") pod "906174c8-210a-4ee1-b18f-76dc4076ed5e" (UID: "906174c8-210a-4ee1-b18f-76dc4076ed5e"). InnerVolumeSpecName "kube-api-access-sqs2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:49:51 crc kubenswrapper[4833]: I0219 12:49:51.316958 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/906174c8-210a-4ee1-b18f-76dc4076ed5e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "906174c8-210a-4ee1-b18f-76dc4076ed5e" (UID: "906174c8-210a-4ee1-b18f-76dc4076ed5e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 12:49:51 crc kubenswrapper[4833]: I0219 12:49:51.372327 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqs2c\" (UniqueName: \"kubernetes.io/projected/906174c8-210a-4ee1-b18f-76dc4076ed5e-kube-api-access-sqs2c\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:51 crc kubenswrapper[4833]: I0219 12:49:51.372393 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/906174c8-210a-4ee1-b18f-76dc4076ed5e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:51 crc kubenswrapper[4833]: I0219 12:49:51.372404 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/906174c8-210a-4ee1-b18f-76dc4076ed5e-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:51 crc kubenswrapper[4833]: I0219 12:49:51.419788 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z7wn9" Feb 19 12:49:51 crc kubenswrapper[4833]: I0219 12:49:51.460617 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z7wn9" Feb 19 12:49:51 crc kubenswrapper[4833]: I0219 12:49:51.793854 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4nnkn" Feb 19 12:49:51 crc kubenswrapper[4833]: I0219 12:49:51.794642 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nnkn" event={"ID":"906174c8-210a-4ee1-b18f-76dc4076ed5e","Type":"ContainerDied","Data":"1188d6169ef29e22ca0f77f20573feec76adeea592dd25ce7895a144cbbdb1f9"} Feb 19 12:49:51 crc kubenswrapper[4833]: I0219 12:49:51.794691 4833 scope.go:117] "RemoveContainer" containerID="787f85ba13feeb113a85583cb50c9ed039d9d1362efd4ae935b57f2313d81938" Feb 19 12:49:51 crc kubenswrapper[4833]: I0219 12:49:51.835570 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4nnkn"] Feb 19 12:49:51 crc kubenswrapper[4833]: I0219 12:49:51.841580 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4nnkn"] Feb 19 12:49:51 crc kubenswrapper[4833]: I0219 12:49:51.917256 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g62n8" Feb 19 12:49:52 crc kubenswrapper[4833]: I0219 12:49:52.005103 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g62n8" Feb 19 12:49:52 crc kubenswrapper[4833]: I0219 12:49:52.322467 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="906174c8-210a-4ee1-b18f-76dc4076ed5e" path="/var/lib/kubelet/pods/906174c8-210a-4ee1-b18f-76dc4076ed5e/volumes" Feb 19 12:49:52 crc kubenswrapper[4833]: I0219 12:49:52.421197 4833 scope.go:117] "RemoveContainer" containerID="93460f6915cba3bef6c58ee7ae1580dc3d8d6e946138053962a53b6e549f9b6c" Feb 19 12:49:53 crc kubenswrapper[4833]: I0219 12:49:53.012039 4833 scope.go:117] "RemoveContainer" containerID="cc7be0a0d3f7d80a259aefede4b9c26b6431fb28fcf763d11133d513d958b2d6" Feb 19 12:49:53 crc kubenswrapper[4833]: I0219 12:49:53.811698 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ssllf" 
event={"ID":"b5adb7ca-e392-4fff-aad0-078c4b6de62e","Type":"ContainerStarted","Data":"15464e91ec7b601e7a727a31b70e124db14b12389f042d9d8cdc907656989ac6"} Feb 19 12:49:54 crc kubenswrapper[4833]: I0219 12:49:54.164627 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86c7df544-rrpmh"] Feb 19 12:49:54 crc kubenswrapper[4833]: I0219 12:49:54.164914 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-86c7df544-rrpmh" podUID="f14784da-0c64-4486-a5d3-30021c547a00" containerName="controller-manager" containerID="cri-o://3fc4802409beed4666e63c289ff1bb90f6326ae3fd4168ea9fb3ddcd6e064b60" gracePeriod=30 Feb 19 12:49:54 crc kubenswrapper[4833]: I0219 12:49:54.189677 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-696dd49c7f-nhgzx"] Feb 19 12:49:54 crc kubenswrapper[4833]: I0219 12:49:54.190045 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-696dd49c7f-nhgzx" podUID="d4639d90-f7ce-47ac-b302-f39fd7ab635e" containerName="route-controller-manager" containerID="cri-o://fe67cdcbda88863a04b2f801d953a28100bb549e433151e3b55e07720aac32df" gracePeriod=30 Feb 19 12:49:54 crc kubenswrapper[4833]: I0219 12:49:54.350893 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g62n8"] Feb 19 12:49:54 crc kubenswrapper[4833]: I0219 12:49:54.351409 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g62n8" podUID="e07089cf-96f1-4054-89e1-19ab49960371" containerName="registry-server" containerID="cri-o://175687bad24c9c8700b9ca3465c227b541a1bae9dbe1e3f143fcd1ae3a492b58" gracePeriod=2 Feb 19 12:49:54 crc kubenswrapper[4833]: I0219 12:49:54.836105 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ssllf" podStartSLOduration=4.025020752 podStartE2EDuration="56.836086335s" podCreationTimestamp="2026-02-19 12:48:58 +0000 UTC" firstStartedPulling="2026-02-19 12:49:00.20106481 +0000 UTC m=+150.596583578" lastFinishedPulling="2026-02-19 12:49:53.012130393 +0000 UTC m=+203.407649161" observedRunningTime="2026-02-19 12:49:54.833349706 +0000 UTC m=+205.228868474" watchObservedRunningTime="2026-02-19 12:49:54.836086335 +0000 UTC m=+205.231605103" Feb 19 12:49:55 crc kubenswrapper[4833]: I0219 12:49:55.827310 4833 generic.go:334] "Generic (PLEG): container finished" podID="e07089cf-96f1-4054-89e1-19ab49960371" containerID="175687bad24c9c8700b9ca3465c227b541a1bae9dbe1e3f143fcd1ae3a492b58" exitCode=0 Feb 19 12:49:55 crc kubenswrapper[4833]: I0219 12:49:55.827351 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g62n8" event={"ID":"e07089cf-96f1-4054-89e1-19ab49960371","Type":"ContainerDied","Data":"175687bad24c9c8700b9ca3465c227b541a1bae9dbe1e3f143fcd1ae3a492b58"} Feb 19 12:49:55 crc kubenswrapper[4833]: I0219 12:49:55.831740 4833 generic.go:334] "Generic (PLEG): container finished" podID="f14784da-0c64-4486-a5d3-30021c547a00" containerID="3fc4802409beed4666e63c289ff1bb90f6326ae3fd4168ea9fb3ddcd6e064b60" exitCode=0 Feb 19 12:49:55 crc kubenswrapper[4833]: I0219 12:49:55.831846 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86c7df544-rrpmh" 
event={"ID":"f14784da-0c64-4486-a5d3-30021c547a00","Type":"ContainerDied","Data":"3fc4802409beed4666e63c289ff1bb90f6326ae3fd4168ea9fb3ddcd6e064b60"} Feb 19 12:49:55 crc kubenswrapper[4833]: I0219 12:49:55.834130 4833 generic.go:334] "Generic (PLEG): container finished" podID="d4639d90-f7ce-47ac-b302-f39fd7ab635e" containerID="fe67cdcbda88863a04b2f801d953a28100bb549e433151e3b55e07720aac32df" exitCode=0 Feb 19 12:49:55 crc kubenswrapper[4833]: I0219 12:49:55.834173 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-696dd49c7f-nhgzx" event={"ID":"d4639d90-f7ce-47ac-b302-f39fd7ab635e","Type":"ContainerDied","Data":"fe67cdcbda88863a04b2f801d953a28100bb549e433151e3b55e07720aac32df"} Feb 19 12:49:57 crc kubenswrapper[4833]: I0219 12:49:57.343436 4833 patch_prober.go:28] interesting pod/controller-manager-86c7df544-rrpmh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Feb 19 12:49:57 crc kubenswrapper[4833]: I0219 12:49:57.344160 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-86c7df544-rrpmh" podUID="f14784da-0c64-4486-a5d3-30021c547a00" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.203615 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jkz78" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.245897 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jkz78" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.386947 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ssllf" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.386988 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ssllf" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.426279 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ssllf" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.433133 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-696dd49c7f-nhgzx" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.485378 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4639d90-f7ce-47ac-b302-f39fd7ab635e-serving-cert\") pod \"d4639d90-f7ce-47ac-b302-f39fd7ab635e\" (UID: \"d4639d90-f7ce-47ac-b302-f39fd7ab635e\") " Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.485508 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wsxg\" (UniqueName: \"kubernetes.io/projected/d4639d90-f7ce-47ac-b302-f39fd7ab635e-kube-api-access-9wsxg\") pod \"d4639d90-f7ce-47ac-b302-f39fd7ab635e\" (UID: \"d4639d90-f7ce-47ac-b302-f39fd7ab635e\") " Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.485564 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4639d90-f7ce-47ac-b302-f39fd7ab635e-config\") pod \"d4639d90-f7ce-47ac-b302-f39fd7ab635e\" (UID: \"d4639d90-f7ce-47ac-b302-f39fd7ab635e\") " Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.485658 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4639d90-f7ce-47ac-b302-f39fd7ab635e-client-ca\") pod \"d4639d90-f7ce-47ac-b302-f39fd7ab635e\" (UID: \"d4639d90-f7ce-47ac-b302-f39fd7ab635e\") " Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.489669 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4639d90-f7ce-47ac-b302-f39fd7ab635e-config" (OuterVolumeSpecName: "config") pod "d4639d90-f7ce-47ac-b302-f39fd7ab635e" (UID: "d4639d90-f7ce-47ac-b302-f39fd7ab635e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.490521 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4639d90-f7ce-47ac-b302-f39fd7ab635e-client-ca" (OuterVolumeSpecName: "client-ca") pod "d4639d90-f7ce-47ac-b302-f39fd7ab635e" (UID: "d4639d90-f7ce-47ac-b302-f39fd7ab635e"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.490617 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-574568df96-8xgfl"] Feb 19 12:49:58 crc kubenswrapper[4833]: E0219 12:49:58.491461 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="906174c8-210a-4ee1-b18f-76dc4076ed5e" containerName="registry-server" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.491478 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="906174c8-210a-4ee1-b18f-76dc4076ed5e" containerName="registry-server" Feb 19 12:49:58 crc kubenswrapper[4833]: E0219 12:49:58.491527 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4639d90-f7ce-47ac-b302-f39fd7ab635e" containerName="route-controller-manager" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.491536 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4639d90-f7ce-47ac-b302-f39fd7ab635e" containerName="route-controller-manager" Feb 19 12:49:58 crc kubenswrapper[4833]: E0219 12:49:58.491574 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="906174c8-210a-4ee1-b18f-76dc4076ed5e" containerName="extract-content" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.491582 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="906174c8-210a-4ee1-b18f-76dc4076ed5e" containerName="extract-content" Feb 19 12:49:58 crc kubenswrapper[4833]: E0219 12:49:58.491605 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="906174c8-210a-4ee1-b18f-76dc4076ed5e" containerName="extract-utilities" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.491613 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="906174c8-210a-4ee1-b18f-76dc4076ed5e" containerName="extract-utilities" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.492045 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="906174c8-210a-4ee1-b18f-76dc4076ed5e" containerName="registry-server" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.492076 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4639d90-f7ce-47ac-b302-f39fd7ab635e" containerName="route-controller-manager" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.493571 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4639d90-f7ce-47ac-b302-f39fd7ab635e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d4639d90-f7ce-47ac-b302-f39fd7ab635e" (UID: "d4639d90-f7ce-47ac-b302-f39fd7ab635e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.500692 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-574568df96-8xgfl"] Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.500802 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-574568df96-8xgfl" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.505739 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4639d90-f7ce-47ac-b302-f39fd7ab635e-kube-api-access-9wsxg" (OuterVolumeSpecName: "kube-api-access-9wsxg") pod "d4639d90-f7ce-47ac-b302-f39fd7ab635e" (UID: "d4639d90-f7ce-47ac-b302-f39fd7ab635e"). InnerVolumeSpecName "kube-api-access-9wsxg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.523963 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g62n8" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.586953 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pw6fc\" (UniqueName: \"kubernetes.io/projected/e07089cf-96f1-4054-89e1-19ab49960371-kube-api-access-pw6fc\") pod \"e07089cf-96f1-4054-89e1-19ab49960371\" (UID: \"e07089cf-96f1-4054-89e1-19ab49960371\") " Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.587091 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e07089cf-96f1-4054-89e1-19ab49960371-catalog-content\") pod \"e07089cf-96f1-4054-89e1-19ab49960371\" (UID: \"e07089cf-96f1-4054-89e1-19ab49960371\") " Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.587156 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e07089cf-96f1-4054-89e1-19ab49960371-utilities\") pod \"e07089cf-96f1-4054-89e1-19ab49960371\" (UID: \"e07089cf-96f1-4054-89e1-19ab49960371\") " Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.587472 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dae7827-aadd-4279-9f5e-7eefc2b6bb46-serving-cert\") pod \"route-controller-manager-574568df96-8xgfl\" (UID: \"8dae7827-aadd-4279-9f5e-7eefc2b6bb46\") " pod="openshift-route-controller-manager/route-controller-manager-574568df96-8xgfl" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.587537 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8dae7827-aadd-4279-9f5e-7eefc2b6bb46-client-ca\") pod \"route-controller-manager-574568df96-8xgfl\" (UID: \"8dae7827-aadd-4279-9f5e-7eefc2b6bb46\") " pod="openshift-route-controller-manager/route-controller-manager-574568df96-8xgfl" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.587567 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dae7827-aadd-4279-9f5e-7eefc2b6bb46-config\") pod \"route-controller-manager-574568df96-8xgfl\" (UID: \"8dae7827-aadd-4279-9f5e-7eefc2b6bb46\") " pod="openshift-route-controller-manager/route-controller-manager-574568df96-8xgfl" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.587612 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j4r8\" (UniqueName: \"kubernetes.io/projected/8dae7827-aadd-4279-9f5e-7eefc2b6bb46-kube-api-access-4j4r8\") pod \"route-controller-manager-574568df96-8xgfl\" (UID: \"8dae7827-aadd-4279-9f5e-7eefc2b6bb46\") " pod="openshift-route-controller-manager/route-controller-manager-574568df96-8xgfl" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.587667 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4639d90-f7ce-47ac-b302-f39fd7ab635e-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.587681 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d4639d90-f7ce-47ac-b302-f39fd7ab635e-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.587692 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wsxg\" (UniqueName: \"kubernetes.io/projected/d4639d90-f7ce-47ac-b302-f39fd7ab635e-kube-api-access-9wsxg\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.587703 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4639d90-f7ce-47ac-b302-f39fd7ab635e-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.588141 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e07089cf-96f1-4054-89e1-19ab49960371-utilities" (OuterVolumeSpecName: "utilities") pod "e07089cf-96f1-4054-89e1-19ab49960371" (UID: "e07089cf-96f1-4054-89e1-19ab49960371"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.588898 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86c7df544-rrpmh" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.591183 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e07089cf-96f1-4054-89e1-19ab49960371-kube-api-access-pw6fc" (OuterVolumeSpecName: "kube-api-access-pw6fc") pod "e07089cf-96f1-4054-89e1-19ab49960371" (UID: "e07089cf-96f1-4054-89e1-19ab49960371"). InnerVolumeSpecName "kube-api-access-pw6fc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.688388 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dae7827-aadd-4279-9f5e-7eefc2b6bb46-serving-cert\") pod \"route-controller-manager-574568df96-8xgfl\" (UID: \"8dae7827-aadd-4279-9f5e-7eefc2b6bb46\") " pod="openshift-route-controller-manager/route-controller-manager-574568df96-8xgfl" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.688443 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8dae7827-aadd-4279-9f5e-7eefc2b6bb46-client-ca\") pod \"route-controller-manager-574568df96-8xgfl\" (UID: \"8dae7827-aadd-4279-9f5e-7eefc2b6bb46\") " pod="openshift-route-controller-manager/route-controller-manager-574568df96-8xgfl" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.688474 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dae7827-aadd-4279-9f5e-7eefc2b6bb46-config\") pod \"route-controller-manager-574568df96-8xgfl\" (UID: \"8dae7827-aadd-4279-9f5e-7eefc2b6bb46\") " pod="openshift-route-controller-manager/route-controller-manager-574568df96-8xgfl" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.688514 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j4r8\" (UniqueName: \"kubernetes.io/projected/8dae7827-aadd-4279-9f5e-7eefc2b6bb46-kube-api-access-4j4r8\") pod \"route-controller-manager-574568df96-8xgfl\" (UID: \"8dae7827-aadd-4279-9f5e-7eefc2b6bb46\") " pod="openshift-route-controller-manager/route-controller-manager-574568df96-8xgfl" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.688753 
4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pw6fc\" (UniqueName: \"kubernetes.io/projected/e07089cf-96f1-4054-89e1-19ab49960371-kube-api-access-pw6fc\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.688951 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e07089cf-96f1-4054-89e1-19ab49960371-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.690306 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8dae7827-aadd-4279-9f5e-7eefc2b6bb46-client-ca\") pod \"route-controller-manager-574568df96-8xgfl\" (UID: \"8dae7827-aadd-4279-9f5e-7eefc2b6bb46\") " pod="openshift-route-controller-manager/route-controller-manager-574568df96-8xgfl" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.690726 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dae7827-aadd-4279-9f5e-7eefc2b6bb46-config\") pod \"route-controller-manager-574568df96-8xgfl\" (UID: \"8dae7827-aadd-4279-9f5e-7eefc2b6bb46\") " pod="openshift-route-controller-manager/route-controller-manager-574568df96-8xgfl" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.693758 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dae7827-aadd-4279-9f5e-7eefc2b6bb46-serving-cert\") pod \"route-controller-manager-574568df96-8xgfl\" (UID: \"8dae7827-aadd-4279-9f5e-7eefc2b6bb46\") " pod="openshift-route-controller-manager/route-controller-manager-574568df96-8xgfl" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.709826 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j4r8\" (UniqueName: \"kubernetes.io/projected/8dae7827-aadd-4279-9f5e-7eefc2b6bb46-kube-api-access-4j4r8\") pod \"route-controller-manager-574568df96-8xgfl\" (UID: \"8dae7827-aadd-4279-9f5e-7eefc2b6bb46\") " pod="openshift-route-controller-manager/route-controller-manager-574568df96-8xgfl" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.715847 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e07089cf-96f1-4054-89e1-19ab49960371-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e07089cf-96f1-4054-89e1-19ab49960371" (UID: "e07089cf-96f1-4054-89e1-19ab49960371"). InnerVolumeSpecName "catalog-content". 
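
Note: each volume in these entries moves through a fixed reconciler lifecycle: "operationExecutor.UnmountVolume started" (reconciler_common.go:159), then "UnmountVolume.TearDown succeeded" (operation_generator.go:803), then "Volume detached" (reconciler_common.go:293); mounts mirror it with "VerifyControllerAttachedVolume started", "MountVolume started", and "MountVolume.SetUp succeeded". A small sketch, assuming journalctl output shaped like this capture, that pairs unmount starts with detaches to surface volumes that never finish tearing down (the script and its field choices are illustrative):

    import re
    import sys

    START = re.compile(r'UnmountVolume started for volume \\?"(?P<vol>[^"\\]+)')
    DETACHED = re.compile(r'Volume detached for volume \\?"(?P<vol>[^"\\]+)')

    pending = set()
    for line in sys.stdin:  # e.g. journalctl -u kubelet | python3 unmounts.py
        if (m := START.search(line)):
            pending.add(m.group("vol"))      # keyed on the short name for brevity;
        elif (m := DETACHED.search(line)):   # the UniqueName field disambiguates pods
            pending.discard(m.group("vol"))

    print(sorted(pending))  # empty when every started unmount reached "Volume detached"
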
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.790069 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f14784da-0c64-4486-a5d3-30021c547a00-proxy-ca-bundles\") pod \"f14784da-0c64-4486-a5d3-30021c547a00\" (UID: \"f14784da-0c64-4486-a5d3-30021c547a00\") " Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.790170 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f14784da-0c64-4486-a5d3-30021c547a00-serving-cert\") pod \"f14784da-0c64-4486-a5d3-30021c547a00\" (UID: \"f14784da-0c64-4486-a5d3-30021c547a00\") " Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.790194 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j925f\" (UniqueName: \"kubernetes.io/projected/f14784da-0c64-4486-a5d3-30021c547a00-kube-api-access-j925f\") pod \"f14784da-0c64-4486-a5d3-30021c547a00\" (UID: \"f14784da-0c64-4486-a5d3-30021c547a00\") " Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.790234 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f14784da-0c64-4486-a5d3-30021c547a00-config\") pod \"f14784da-0c64-4486-a5d3-30021c547a00\" (UID: \"f14784da-0c64-4486-a5d3-30021c547a00\") " Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.790263 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f14784da-0c64-4486-a5d3-30021c547a00-client-ca\") pod \"f14784da-0c64-4486-a5d3-30021c547a00\" (UID: \"f14784da-0c64-4486-a5d3-30021c547a00\") " Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.790609 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e07089cf-96f1-4054-89e1-19ab49960371-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.791184 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f14784da-0c64-4486-a5d3-30021c547a00-client-ca" (OuterVolumeSpecName: "client-ca") pod "f14784da-0c64-4486-a5d3-30021c547a00" (UID: "f14784da-0c64-4486-a5d3-30021c547a00"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.791206 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f14784da-0c64-4486-a5d3-30021c547a00-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f14784da-0c64-4486-a5d3-30021c547a00" (UID: "f14784da-0c64-4486-a5d3-30021c547a00"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.791580 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f14784da-0c64-4486-a5d3-30021c547a00-config" (OuterVolumeSpecName: "config") pod "f14784da-0c64-4486-a5d3-30021c547a00" (UID: "f14784da-0c64-4486-a5d3-30021c547a00"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.793483 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f14784da-0c64-4486-a5d3-30021c547a00-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f14784da-0c64-4486-a5d3-30021c547a00" (UID: "f14784da-0c64-4486-a5d3-30021c547a00"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.795032 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f14784da-0c64-4486-a5d3-30021c547a00-kube-api-access-j925f" (OuterVolumeSpecName: "kube-api-access-j925f") pod "f14784da-0c64-4486-a5d3-30021c547a00" (UID: "f14784da-0c64-4486-a5d3-30021c547a00"). InnerVolumeSpecName "kube-api-access-j925f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.833814 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-574568df96-8xgfl" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.856543 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rbjpt" event={"ID":"950b9cae-bb19-478e-b128-83968a16e80f","Type":"ContainerStarted","Data":"f6bd404db875b51bde39c9221e015062f2a56abffcda1a7ad153d6e3109cbde7"} Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.858957 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86c7df544-rrpmh" event={"ID":"f14784da-0c64-4486-a5d3-30021c547a00","Type":"ContainerDied","Data":"6850da98eb5a04f255cdb19c19566ec15572c224f038507c83704dadd6304707"} Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.858984 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86c7df544-rrpmh" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.859029 4833 scope.go:117] "RemoveContainer" containerID="3fc4802409beed4666e63c289ff1bb90f6326ae3fd4168ea9fb3ddcd6e064b60" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.865753 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pjrfv" event={"ID":"01feede9-207a-499b-aee1-0fcde52463d6","Type":"ContainerStarted","Data":"6bff3fbd4564fe4bf6e2129ffa1843a7f4c0fb57da5e1c5f2c024d7f3d5077e7"} Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.868792 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-696dd49c7f-nhgzx" event={"ID":"d4639d90-f7ce-47ac-b302-f39fd7ab635e","Type":"ContainerDied","Data":"0720107a74b9bc4c3ced749c2c1f2a95b38d1f60a3e1c9eb5db33dc6ac739410"} Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.868851 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-696dd49c7f-nhgzx" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.873531 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g62n8" event={"ID":"e07089cf-96f1-4054-89e1-19ab49960371","Type":"ContainerDied","Data":"957a65540dcf1ec43670394c83f71fe3d13ccd34339e524fbfa5a23482c7892c"} Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.873647 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g62n8" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.891746 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f14784da-0c64-4486-a5d3-30021c547a00-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.891810 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j925f\" (UniqueName: \"kubernetes.io/projected/f14784da-0c64-4486-a5d3-30021c547a00-kube-api-access-j925f\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.891837 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f14784da-0c64-4486-a5d3-30021c547a00-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.891861 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f14784da-0c64-4486-a5d3-30021c547a00-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.891884 4833 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f14784da-0c64-4486-a5d3-30021c547a00-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.896663 4833 scope.go:117] "RemoveContainer" containerID="fe67cdcbda88863a04b2f801d953a28100bb549e433151e3b55e07720aac32df" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.923754 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-696dd49c7f-nhgzx"] Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.933568 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ssllf" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.933675 4833 scope.go:117] "RemoveContainer" containerID="175687bad24c9c8700b9ca3465c227b541a1bae9dbe1e3f143fcd1ae3a492b58" Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.936406 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-696dd49c7f-nhgzx"] Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.940738 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86c7df544-rrpmh"] Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.946677 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-86c7df544-rrpmh"] Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.948301 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g62n8"] Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.951561 4833 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-marketplace/redhat-operators-g62n8"] Feb 19 12:49:58 crc kubenswrapper[4833]: I0219 12:49:58.989025 4833 scope.go:117] "RemoveContainer" containerID="b4465d5c7c6dbe5110f420c1061f0b6890fd947e61edbca4b416c402c3cb8c09" Feb 19 12:49:59 crc kubenswrapper[4833]: I0219 12:49:59.014334 4833 scope.go:117] "RemoveContainer" containerID="5d699212959813ccd4bbbe64f43fd2bba63f188f3f802f44c16ca57cdca1dc57" Feb 19 12:49:59 crc kubenswrapper[4833]: I0219 12:49:59.337452 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-574568df96-8xgfl"] Feb 19 12:49:59 crc kubenswrapper[4833]: W0219 12:49:59.343160 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8dae7827_aadd_4279_9f5e_7eefc2b6bb46.slice/crio-618fb42158202a98512a691b84c91e7c07bd8cccfb2ed3e636008a2ae74727ac WatchSource:0}: Error finding container 618fb42158202a98512a691b84c91e7c07bd8cccfb2ed3e636008a2ae74727ac: Status 404 returned error can't find the container with id 618fb42158202a98512a691b84c91e7c07bd8cccfb2ed3e636008a2ae74727ac Feb 19 12:49:59 crc kubenswrapper[4833]: I0219 12:49:59.890542 4833 generic.go:334] "Generic (PLEG): container finished" podID="950b9cae-bb19-478e-b128-83968a16e80f" containerID="f6bd404db875b51bde39c9221e015062f2a56abffcda1a7ad153d6e3109cbde7" exitCode=0 Feb 19 12:49:59 crc kubenswrapper[4833]: I0219 12:49:59.890676 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rbjpt" event={"ID":"950b9cae-bb19-478e-b128-83968a16e80f","Type":"ContainerDied","Data":"f6bd404db875b51bde39c9221e015062f2a56abffcda1a7ad153d6e3109cbde7"} Feb 19 12:49:59 crc kubenswrapper[4833]: I0219 12:49:59.892670 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-574568df96-8xgfl" event={"ID":"8dae7827-aadd-4279-9f5e-7eefc2b6bb46","Type":"ContainerStarted","Data":"76df6e6de7bdca94a3482aec8a295303c447b17282d1d4a44f1dc555df129ada"} Feb 19 12:49:59 crc kubenswrapper[4833]: I0219 12:49:59.892725 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-574568df96-8xgfl" event={"ID":"8dae7827-aadd-4279-9f5e-7eefc2b6bb46","Type":"ContainerStarted","Data":"618fb42158202a98512a691b84c91e7c07bd8cccfb2ed3e636008a2ae74727ac"} Feb 19 12:49:59 crc kubenswrapper[4833]: I0219 12:49:59.892941 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-574568df96-8xgfl" Feb 19 12:49:59 crc kubenswrapper[4833]: I0219 12:49:59.897350 4833 generic.go:334] "Generic (PLEG): container finished" podID="01feede9-207a-499b-aee1-0fcde52463d6" containerID="6bff3fbd4564fe4bf6e2129ffa1843a7f4c0fb57da5e1c5f2c024d7f3d5077e7" exitCode=0 Feb 19 12:49:59 crc kubenswrapper[4833]: I0219 12:49:59.897628 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pjrfv" event={"ID":"01feede9-207a-499b-aee1-0fcde52463d6","Type":"ContainerDied","Data":"6bff3fbd4564fe4bf6e2129ffa1843a7f4c0fb57da5e1c5f2c024d7f3d5077e7"} Feb 19 12:49:59 crc kubenswrapper[4833]: I0219 12:49:59.945631 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-574568df96-8xgfl" podStartSLOduration=5.945613854 podStartE2EDuration="5.945613854s" 
podCreationTimestamp="2026-02-19 12:49:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:49:59.944937717 +0000 UTC m=+210.340456535" watchObservedRunningTime="2026-02-19 12:49:59.945613854 +0000 UTC m=+210.341132632" Feb 19 12:50:00 crc kubenswrapper[4833]: I0219 12:50:00.228215 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-574568df96-8xgfl" Feb 19 12:50:00 crc kubenswrapper[4833]: I0219 12:50:00.342700 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4639d90-f7ce-47ac-b302-f39fd7ab635e" path="/var/lib/kubelet/pods/d4639d90-f7ce-47ac-b302-f39fd7ab635e/volumes" Feb 19 12:50:00 crc kubenswrapper[4833]: I0219 12:50:00.344439 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e07089cf-96f1-4054-89e1-19ab49960371" path="/var/lib/kubelet/pods/e07089cf-96f1-4054-89e1-19ab49960371/volumes" Feb 19 12:50:00 crc kubenswrapper[4833]: I0219 12:50:00.345523 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f14784da-0c64-4486-a5d3-30021c547a00" path="/var/lib/kubelet/pods/f14784da-0c64-4486-a5d3-30021c547a00/volumes" Feb 19 12:50:00 crc kubenswrapper[4833]: I0219 12:50:00.755196 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ssllf"] Feb 19 12:50:00 crc kubenswrapper[4833]: I0219 12:50:00.788793 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5fb7794c96-65b5k"] Feb 19 12:50:00 crc kubenswrapper[4833]: E0219 12:50:00.789000 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e07089cf-96f1-4054-89e1-19ab49960371" containerName="extract-utilities" Feb 19 12:50:00 crc kubenswrapper[4833]: I0219 12:50:00.789011 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e07089cf-96f1-4054-89e1-19ab49960371" containerName="extract-utilities" Feb 19 12:50:00 crc kubenswrapper[4833]: E0219 12:50:00.789028 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e07089cf-96f1-4054-89e1-19ab49960371" containerName="registry-server" Feb 19 12:50:00 crc kubenswrapper[4833]: I0219 12:50:00.789034 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e07089cf-96f1-4054-89e1-19ab49960371" containerName="registry-server" Feb 19 12:50:00 crc kubenswrapper[4833]: E0219 12:50:00.789042 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e07089cf-96f1-4054-89e1-19ab49960371" containerName="extract-content" Feb 19 12:50:00 crc kubenswrapper[4833]: I0219 12:50:00.789048 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e07089cf-96f1-4054-89e1-19ab49960371" containerName="extract-content" Feb 19 12:50:00 crc kubenswrapper[4833]: E0219 12:50:00.789056 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f14784da-0c64-4486-a5d3-30021c547a00" containerName="controller-manager" Feb 19 12:50:00 crc kubenswrapper[4833]: I0219 12:50:00.789062 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f14784da-0c64-4486-a5d3-30021c547a00" containerName="controller-manager" Feb 19 12:50:00 crc kubenswrapper[4833]: I0219 12:50:00.789164 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f14784da-0c64-4486-a5d3-30021c547a00" containerName="controller-manager" Feb 19 12:50:00 crc kubenswrapper[4833]: I0219 12:50:00.789178 4833 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="e07089cf-96f1-4054-89e1-19ab49960371" containerName="registry-server" Feb 19 12:50:00 crc kubenswrapper[4833]: I0219 12:50:00.789531 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5fb7794c96-65b5k" Feb 19 12:50:00 crc kubenswrapper[4833]: I0219 12:50:00.794137 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 12:50:00 crc kubenswrapper[4833]: I0219 12:50:00.795142 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 12:50:00 crc kubenswrapper[4833]: I0219 12:50:00.795611 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 12:50:00 crc kubenswrapper[4833]: I0219 12:50:00.796018 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 12:50:00 crc kubenswrapper[4833]: I0219 12:50:00.797468 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 12:50:00 crc kubenswrapper[4833]: I0219 12:50:00.797958 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 12:50:00 crc kubenswrapper[4833]: I0219 12:50:00.802620 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 12:50:00 crc kubenswrapper[4833]: I0219 12:50:00.811811 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5fb7794c96-65b5k"] Feb 19 12:50:00 crc kubenswrapper[4833]: I0219 12:50:00.906331 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rbjpt" event={"ID":"950b9cae-bb19-478e-b128-83968a16e80f","Type":"ContainerStarted","Data":"9e7d3611d3688ce12706eeacabd662ff424e5819bfa34dd0a938a6e0855e4a94"} Feb 19 12:50:00 crc kubenswrapper[4833]: I0219 12:50:00.909270 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pjrfv" event={"ID":"01feede9-207a-499b-aee1-0fcde52463d6","Type":"ContainerStarted","Data":"11c67ecb15a56d729e586fc2a27c91452fc2511d378db23750f24c7ab8af476e"} Feb 19 12:50:00 crc kubenswrapper[4833]: I0219 12:50:00.909372 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ssllf" podUID="b5adb7ca-e392-4fff-aad0-078c4b6de62e" containerName="registry-server" containerID="cri-o://15464e91ec7b601e7a727a31b70e124db14b12389f042d9d8cdc907656989ac6" gracePeriod=2 Feb 19 12:50:00 crc kubenswrapper[4833]: I0219 12:50:00.916892 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6c7039d-02a0-46f9-8e57-37b9ee2d8188-client-ca\") pod \"controller-manager-5fb7794c96-65b5k\" (UID: \"b6c7039d-02a0-46f9-8e57-37b9ee2d8188\") " pod="openshift-controller-manager/controller-manager-5fb7794c96-65b5k" Feb 19 12:50:00 crc kubenswrapper[4833]: I0219 12:50:00.916950 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b6c7039d-02a0-46f9-8e57-37b9ee2d8188-proxy-ca-bundles\") pod 
\"controller-manager-5fb7794c96-65b5k\" (UID: \"b6c7039d-02a0-46f9-8e57-37b9ee2d8188\") " pod="openshift-controller-manager/controller-manager-5fb7794c96-65b5k" Feb 19 12:50:00 crc kubenswrapper[4833]: I0219 12:50:00.916982 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krhgj\" (UniqueName: \"kubernetes.io/projected/b6c7039d-02a0-46f9-8e57-37b9ee2d8188-kube-api-access-krhgj\") pod \"controller-manager-5fb7794c96-65b5k\" (UID: \"b6c7039d-02a0-46f9-8e57-37b9ee2d8188\") " pod="openshift-controller-manager/controller-manager-5fb7794c96-65b5k" Feb 19 12:50:00 crc kubenswrapper[4833]: I0219 12:50:00.917014 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6c7039d-02a0-46f9-8e57-37b9ee2d8188-serving-cert\") pod \"controller-manager-5fb7794c96-65b5k\" (UID: \"b6c7039d-02a0-46f9-8e57-37b9ee2d8188\") " pod="openshift-controller-manager/controller-manager-5fb7794c96-65b5k" Feb 19 12:50:00 crc kubenswrapper[4833]: I0219 12:50:00.917077 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6c7039d-02a0-46f9-8e57-37b9ee2d8188-config\") pod \"controller-manager-5fb7794c96-65b5k\" (UID: \"b6c7039d-02a0-46f9-8e57-37b9ee2d8188\") " pod="openshift-controller-manager/controller-manager-5fb7794c96-65b5k" Feb 19 12:50:00 crc kubenswrapper[4833]: I0219 12:50:00.934882 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rbjpt" podStartSLOduration=1.878077998 podStartE2EDuration="1m0.934860317s" podCreationTimestamp="2026-02-19 12:49:00 +0000 UTC" firstStartedPulling="2026-02-19 12:49:01.21944446 +0000 UTC m=+151.614963228" lastFinishedPulling="2026-02-19 12:50:00.276226749 +0000 UTC m=+210.671745547" observedRunningTime="2026-02-19 12:50:00.930749053 +0000 UTC m=+211.326267831" watchObservedRunningTime="2026-02-19 12:50:00.934860317 +0000 UTC m=+211.330379125" Feb 19 12:50:00 crc kubenswrapper[4833]: I0219 12:50:00.956652 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pjrfv" podStartSLOduration=2.648894978 podStartE2EDuration="1m1.956629565s" podCreationTimestamp="2026-02-19 12:48:59 +0000 UTC" firstStartedPulling="2026-02-19 12:49:01.251213178 +0000 UTC m=+151.646731946" lastFinishedPulling="2026-02-19 12:50:00.558947755 +0000 UTC m=+210.954466533" observedRunningTime="2026-02-19 12:50:00.955299162 +0000 UTC m=+211.350817950" watchObservedRunningTime="2026-02-19 12:50:00.956629565 +0000 UTC m=+211.352148333" Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.019924 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6c7039d-02a0-46f9-8e57-37b9ee2d8188-config\") pod \"controller-manager-5fb7794c96-65b5k\" (UID: \"b6c7039d-02a0-46f9-8e57-37b9ee2d8188\") " pod="openshift-controller-manager/controller-manager-5fb7794c96-65b5k" Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.020020 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6c7039d-02a0-46f9-8e57-37b9ee2d8188-client-ca\") pod \"controller-manager-5fb7794c96-65b5k\" (UID: \"b6c7039d-02a0-46f9-8e57-37b9ee2d8188\") " 
pod="openshift-controller-manager/controller-manager-5fb7794c96-65b5k" Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.020046 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b6c7039d-02a0-46f9-8e57-37b9ee2d8188-proxy-ca-bundles\") pod \"controller-manager-5fb7794c96-65b5k\" (UID: \"b6c7039d-02a0-46f9-8e57-37b9ee2d8188\") " pod="openshift-controller-manager/controller-manager-5fb7794c96-65b5k" Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.020104 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krhgj\" (UniqueName: \"kubernetes.io/projected/b6c7039d-02a0-46f9-8e57-37b9ee2d8188-kube-api-access-krhgj\") pod \"controller-manager-5fb7794c96-65b5k\" (UID: \"b6c7039d-02a0-46f9-8e57-37b9ee2d8188\") " pod="openshift-controller-manager/controller-manager-5fb7794c96-65b5k" Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.020140 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6c7039d-02a0-46f9-8e57-37b9ee2d8188-serving-cert\") pod \"controller-manager-5fb7794c96-65b5k\" (UID: \"b6c7039d-02a0-46f9-8e57-37b9ee2d8188\") " pod="openshift-controller-manager/controller-manager-5fb7794c96-65b5k" Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.021531 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6c7039d-02a0-46f9-8e57-37b9ee2d8188-client-ca\") pod \"controller-manager-5fb7794c96-65b5k\" (UID: \"b6c7039d-02a0-46f9-8e57-37b9ee2d8188\") " pod="openshift-controller-manager/controller-manager-5fb7794c96-65b5k" Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.021777 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6c7039d-02a0-46f9-8e57-37b9ee2d8188-config\") pod \"controller-manager-5fb7794c96-65b5k\" (UID: \"b6c7039d-02a0-46f9-8e57-37b9ee2d8188\") " pod="openshift-controller-manager/controller-manager-5fb7794c96-65b5k" Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.022631 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b6c7039d-02a0-46f9-8e57-37b9ee2d8188-proxy-ca-bundles\") pod \"controller-manager-5fb7794c96-65b5k\" (UID: \"b6c7039d-02a0-46f9-8e57-37b9ee2d8188\") " pod="openshift-controller-manager/controller-manager-5fb7794c96-65b5k" Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.026637 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6c7039d-02a0-46f9-8e57-37b9ee2d8188-serving-cert\") pod \"controller-manager-5fb7794c96-65b5k\" (UID: \"b6c7039d-02a0-46f9-8e57-37b9ee2d8188\") " pod="openshift-controller-manager/controller-manager-5fb7794c96-65b5k" Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.038557 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krhgj\" (UniqueName: \"kubernetes.io/projected/b6c7039d-02a0-46f9-8e57-37b9ee2d8188-kube-api-access-krhgj\") pod \"controller-manager-5fb7794c96-65b5k\" (UID: \"b6c7039d-02a0-46f9-8e57-37b9ee2d8188\") " pod="openshift-controller-manager/controller-manager-5fb7794c96-65b5k" Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.106400 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5fb7794c96-65b5k" Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.315692 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ssllf" Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.357931 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5fb7794c96-65b5k"] Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.425829 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5adb7ca-e392-4fff-aad0-078c4b6de62e-utilities\") pod \"b5adb7ca-e392-4fff-aad0-078c4b6de62e\" (UID: \"b5adb7ca-e392-4fff-aad0-078c4b6de62e\") " Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.426432 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5adb7ca-e392-4fff-aad0-078c4b6de62e-catalog-content\") pod \"b5adb7ca-e392-4fff-aad0-078c4b6de62e\" (UID: \"b5adb7ca-e392-4fff-aad0-078c4b6de62e\") " Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.426575 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdcq7\" (UniqueName: \"kubernetes.io/projected/b5adb7ca-e392-4fff-aad0-078c4b6de62e-kube-api-access-mdcq7\") pod \"b5adb7ca-e392-4fff-aad0-078c4b6de62e\" (UID: \"b5adb7ca-e392-4fff-aad0-078c4b6de62e\") " Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.426868 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5adb7ca-e392-4fff-aad0-078c4b6de62e-utilities" (OuterVolumeSpecName: "utilities") pod "b5adb7ca-e392-4fff-aad0-078c4b6de62e" (UID: "b5adb7ca-e392-4fff-aad0-078c4b6de62e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.427345 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5adb7ca-e392-4fff-aad0-078c4b6de62e-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.433679 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5adb7ca-e392-4fff-aad0-078c4b6de62e-kube-api-access-mdcq7" (OuterVolumeSpecName: "kube-api-access-mdcq7") pod "b5adb7ca-e392-4fff-aad0-078c4b6de62e" (UID: "b5adb7ca-e392-4fff-aad0-078c4b6de62e"). InnerVolumeSpecName "kube-api-access-mdcq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.478596 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5adb7ca-e392-4fff-aad0-078c4b6de62e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5adb7ca-e392-4fff-aad0-078c4b6de62e" (UID: "b5adb7ca-e392-4fff-aad0-078c4b6de62e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.528080 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdcq7\" (UniqueName: \"kubernetes.io/projected/b5adb7ca-e392-4fff-aad0-078c4b6de62e-kube-api-access-mdcq7\") on node \"crc\" DevicePath \"\"" Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.528120 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5adb7ca-e392-4fff-aad0-078c4b6de62e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.924238 4833 generic.go:334] "Generic (PLEG): container finished" podID="b5adb7ca-e392-4fff-aad0-078c4b6de62e" containerID="15464e91ec7b601e7a727a31b70e124db14b12389f042d9d8cdc907656989ac6" exitCode=0 Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.924288 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ssllf" Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.924312 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ssllf" event={"ID":"b5adb7ca-e392-4fff-aad0-078c4b6de62e","Type":"ContainerDied","Data":"15464e91ec7b601e7a727a31b70e124db14b12389f042d9d8cdc907656989ac6"} Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.924339 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ssllf" event={"ID":"b5adb7ca-e392-4fff-aad0-078c4b6de62e","Type":"ContainerDied","Data":"fc34e05d78372393d5898093d7829081e9234f4acb0c95669f3f2807418545ac"} Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.924356 4833 scope.go:117] "RemoveContainer" containerID="15464e91ec7b601e7a727a31b70e124db14b12389f042d9d8cdc907656989ac6" Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.926481 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fb7794c96-65b5k" event={"ID":"b6c7039d-02a0-46f9-8e57-37b9ee2d8188","Type":"ContainerStarted","Data":"ca4211d503d2b78912ed499c35a77280e33c30ff5dff960c7d317f85603099c1"} Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.926520 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fb7794c96-65b5k" event={"ID":"b6c7039d-02a0-46f9-8e57-37b9ee2d8188","Type":"ContainerStarted","Data":"f81e532b189d5c17935af8e52a2d3bdf49b7bb6a431e9ff99697e114b53787ab"} Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.941777 4833 scope.go:117] "RemoveContainer" containerID="2388400abc4c05575b6e31174e57c854900f1d52d40f90d484735f0da692527d" Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.946158 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5fb7794c96-65b5k" podStartSLOduration=7.946143129 podStartE2EDuration="7.946143129s" podCreationTimestamp="2026-02-19 12:49:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:50:01.943341617 +0000 UTC m=+212.338860385" watchObservedRunningTime="2026-02-19 12:50:01.946143129 +0000 UTC m=+212.341661897" Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.959600 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ssllf"] Feb 19 12:50:01 crc kubenswrapper[4833]: 
I0219 12:50:01.960213 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ssllf"] Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.963615 4833 scope.go:117] "RemoveContainer" containerID="ab1a9814b38a895b9229dfc280bf95fe31d91d7fc56faace53dbeafacf805181" Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.979832 4833 scope.go:117] "RemoveContainer" containerID="15464e91ec7b601e7a727a31b70e124db14b12389f042d9d8cdc907656989ac6" Feb 19 12:50:01 crc kubenswrapper[4833]: E0219 12:50:01.982309 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15464e91ec7b601e7a727a31b70e124db14b12389f042d9d8cdc907656989ac6\": container with ID starting with 15464e91ec7b601e7a727a31b70e124db14b12389f042d9d8cdc907656989ac6 not found: ID does not exist" containerID="15464e91ec7b601e7a727a31b70e124db14b12389f042d9d8cdc907656989ac6" Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.982353 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15464e91ec7b601e7a727a31b70e124db14b12389f042d9d8cdc907656989ac6"} err="failed to get container status \"15464e91ec7b601e7a727a31b70e124db14b12389f042d9d8cdc907656989ac6\": rpc error: code = NotFound desc = could not find container \"15464e91ec7b601e7a727a31b70e124db14b12389f042d9d8cdc907656989ac6\": container with ID starting with 15464e91ec7b601e7a727a31b70e124db14b12389f042d9d8cdc907656989ac6 not found: ID does not exist" Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.982380 4833 scope.go:117] "RemoveContainer" containerID="2388400abc4c05575b6e31174e57c854900f1d52d40f90d484735f0da692527d" Feb 19 12:50:01 crc kubenswrapper[4833]: E0219 12:50:01.982834 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2388400abc4c05575b6e31174e57c854900f1d52d40f90d484735f0da692527d\": container with ID starting with 2388400abc4c05575b6e31174e57c854900f1d52d40f90d484735f0da692527d not found: ID does not exist" containerID="2388400abc4c05575b6e31174e57c854900f1d52d40f90d484735f0da692527d" Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.982880 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2388400abc4c05575b6e31174e57c854900f1d52d40f90d484735f0da692527d"} err="failed to get container status \"2388400abc4c05575b6e31174e57c854900f1d52d40f90d484735f0da692527d\": rpc error: code = NotFound desc = could not find container \"2388400abc4c05575b6e31174e57c854900f1d52d40f90d484735f0da692527d\": container with ID starting with 2388400abc4c05575b6e31174e57c854900f1d52d40f90d484735f0da692527d not found: ID does not exist" Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.982904 4833 scope.go:117] "RemoveContainer" containerID="ab1a9814b38a895b9229dfc280bf95fe31d91d7fc56faace53dbeafacf805181" Feb 19 12:50:01 crc kubenswrapper[4833]: E0219 12:50:01.983182 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab1a9814b38a895b9229dfc280bf95fe31d91d7fc56faace53dbeafacf805181\": container with ID starting with ab1a9814b38a895b9229dfc280bf95fe31d91d7fc56faace53dbeafacf805181 not found: ID does not exist" containerID="ab1a9814b38a895b9229dfc280bf95fe31d91d7fc56faace53dbeafacf805181" Feb 19 12:50:01 crc kubenswrapper[4833]: I0219 12:50:01.983206 4833 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"ab1a9814b38a895b9229dfc280bf95fe31d91d7fc56faace53dbeafacf805181"} err="failed to get container status \"ab1a9814b38a895b9229dfc280bf95fe31d91d7fc56faace53dbeafacf805181\": rpc error: code = NotFound desc = could not find container \"ab1a9814b38a895b9229dfc280bf95fe31d91d7fc56faace53dbeafacf805181\": container with ID starting with ab1a9814b38a895b9229dfc280bf95fe31d91d7fc56faace53dbeafacf805181 not found: ID does not exist" Feb 19 12:50:02 crc kubenswrapper[4833]: I0219 12:50:02.322776 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5adb7ca-e392-4fff-aad0-078c4b6de62e" path="/var/lib/kubelet/pods/b5adb7ca-e392-4fff-aad0-078c4b6de62e/volumes" Feb 19 12:50:02 crc kubenswrapper[4833]: I0219 12:50:02.935273 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5fb7794c96-65b5k" Feb 19 12:50:02 crc kubenswrapper[4833]: I0219 12:50:02.942092 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5fb7794c96-65b5k" Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.102309 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-66dsh" podUID="6bcce72d-6a5d-42d2-b7ed-c721057061f6" containerName="oauth-openshift" containerID="cri-o://f18446f806dd787e55fd9d72727f893f9ea695a0439a02a3bfdd9dd898f3af2b" gracePeriod=15 Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.609607 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-66dsh" Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.687077 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-cliconfig\") pod \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.687151 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6bcce72d-6a5d-42d2-b7ed-c721057061f6-audit-policies\") pod \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.687211 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-serving-cert\") pod \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.687279 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-user-template-login\") pod \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.687328 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-user-idp-0-file-data\") pod 
\"6bcce72d-6a5d-42d2-b7ed-c721057061f6\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.687355 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-user-template-provider-selection\") pod \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.687381 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-session\") pod \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.687405 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-trusted-ca-bundle\") pod \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.687442 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cn6zf\" (UniqueName: \"kubernetes.io/projected/6bcce72d-6a5d-42d2-b7ed-c721057061f6-kube-api-access-cn6zf\") pod \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.687484 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-user-template-error\") pod \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.687529 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-router-certs\") pod \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.687555 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6bcce72d-6a5d-42d2-b7ed-c721057061f6-audit-dir\") pod \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.687583 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-ocp-branding-template\") pod \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.687615 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-service-ca\") pod \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\" (UID: \"6bcce72d-6a5d-42d2-b7ed-c721057061f6\") " Feb 19 
12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.688897 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6bcce72d-6a5d-42d2-b7ed-c721057061f6" (UID: "6bcce72d-6a5d-42d2-b7ed-c721057061f6"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.689258 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6bcce72d-6a5d-42d2-b7ed-c721057061f6" (UID: "6bcce72d-6a5d-42d2-b7ed-c721057061f6"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.689588 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bcce72d-6a5d-42d2-b7ed-c721057061f6-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6bcce72d-6a5d-42d2-b7ed-c721057061f6" (UID: "6bcce72d-6a5d-42d2-b7ed-c721057061f6"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.690652 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bcce72d-6a5d-42d2-b7ed-c721057061f6-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "6bcce72d-6a5d-42d2-b7ed-c721057061f6" (UID: "6bcce72d-6a5d-42d2-b7ed-c721057061f6"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.690725 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6bcce72d-6a5d-42d2-b7ed-c721057061f6" (UID: "6bcce72d-6a5d-42d2-b7ed-c721057061f6"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.701904 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6bcce72d-6a5d-42d2-b7ed-c721057061f6" (UID: "6bcce72d-6a5d-42d2-b7ed-c721057061f6"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.703141 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bcce72d-6a5d-42d2-b7ed-c721057061f6-kube-api-access-cn6zf" (OuterVolumeSpecName: "kube-api-access-cn6zf") pod "6bcce72d-6a5d-42d2-b7ed-c721057061f6" (UID: "6bcce72d-6a5d-42d2-b7ed-c721057061f6"). InnerVolumeSpecName "kube-api-access-cn6zf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.704131 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6bcce72d-6a5d-42d2-b7ed-c721057061f6" (UID: "6bcce72d-6a5d-42d2-b7ed-c721057061f6"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.704472 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6bcce72d-6a5d-42d2-b7ed-c721057061f6" (UID: "6bcce72d-6a5d-42d2-b7ed-c721057061f6"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.704710 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6bcce72d-6a5d-42d2-b7ed-c721057061f6" (UID: "6bcce72d-6a5d-42d2-b7ed-c721057061f6"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.705596 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6bcce72d-6a5d-42d2-b7ed-c721057061f6" (UID: "6bcce72d-6a5d-42d2-b7ed-c721057061f6"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.706766 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6bcce72d-6a5d-42d2-b7ed-c721057061f6" (UID: "6bcce72d-6a5d-42d2-b7ed-c721057061f6"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.707004 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6bcce72d-6a5d-42d2-b7ed-c721057061f6" (UID: "6bcce72d-6a5d-42d2-b7ed-c721057061f6"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.707297 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6bcce72d-6a5d-42d2-b7ed-c721057061f6" (UID: "6bcce72d-6a5d-42d2-b7ed-c721057061f6"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.789379 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.789424 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.789441 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.789455 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.789467 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cn6zf\" (UniqueName: \"kubernetes.io/projected/6bcce72d-6a5d-42d2-b7ed-c721057061f6-kube-api-access-cn6zf\") on node \"crc\" DevicePath \"\"" Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.789481 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.789508 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.789520 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.789535 4833 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6bcce72d-6a5d-42d2-b7ed-c721057061f6-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.789547 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.789559 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.789569 4833 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/6bcce72d-6a5d-42d2-b7ed-c721057061f6-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.789580 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.789592 4833 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6bcce72d-6a5d-42d2-b7ed-c721057061f6-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.966088 4833 generic.go:334] "Generic (PLEG): container finished" podID="6bcce72d-6a5d-42d2-b7ed-c721057061f6" containerID="f18446f806dd787e55fd9d72727f893f9ea695a0439a02a3bfdd9dd898f3af2b" exitCode=0 Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.966149 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-66dsh" event={"ID":"6bcce72d-6a5d-42d2-b7ed-c721057061f6","Type":"ContainerDied","Data":"f18446f806dd787e55fd9d72727f893f9ea695a0439a02a3bfdd9dd898f3af2b"} Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.966176 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-66dsh" event={"ID":"6bcce72d-6a5d-42d2-b7ed-c721057061f6","Type":"ContainerDied","Data":"f95f42bcb62df035a2217d86beb9f3e33e0952a1fdfd8a7adefebdb628a7593e"} Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.966193 4833 scope.go:117] "RemoveContainer" containerID="f18446f806dd787e55fd9d72727f893f9ea695a0439a02a3bfdd9dd898f3af2b" Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.966369 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-66dsh" Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.985853 4833 scope.go:117] "RemoveContainer" containerID="f18446f806dd787e55fd9d72727f893f9ea695a0439a02a3bfdd9dd898f3af2b" Feb 19 12:50:05 crc kubenswrapper[4833]: E0219 12:50:05.986196 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f18446f806dd787e55fd9d72727f893f9ea695a0439a02a3bfdd9dd898f3af2b\": container with ID starting with f18446f806dd787e55fd9d72727f893f9ea695a0439a02a3bfdd9dd898f3af2b not found: ID does not exist" containerID="f18446f806dd787e55fd9d72727f893f9ea695a0439a02a3bfdd9dd898f3af2b" Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.986233 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f18446f806dd787e55fd9d72727f893f9ea695a0439a02a3bfdd9dd898f3af2b"} err="failed to get container status \"f18446f806dd787e55fd9d72727f893f9ea695a0439a02a3bfdd9dd898f3af2b\": rpc error: code = NotFound desc = could not find container \"f18446f806dd787e55fd9d72727f893f9ea695a0439a02a3bfdd9dd898f3af2b\": container with ID starting with f18446f806dd787e55fd9d72727f893f9ea695a0439a02a3bfdd9dd898f3af2b not found: ID does not exist" Feb 19 12:50:05 crc kubenswrapper[4833]: I0219 12:50:05.996575 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-66dsh"] Feb 19 12:50:06 crc kubenswrapper[4833]: I0219 12:50:06.001235 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-66dsh"] Feb 19 12:50:06 crc kubenswrapper[4833]: I0219 12:50:06.322452 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bcce72d-6a5d-42d2-b7ed-c721057061f6" path="/var/lib/kubelet/pods/6bcce72d-6a5d-42d2-b7ed-c721057061f6/volumes" Feb 19 12:50:09 crc kubenswrapper[4833]: I0219 12:50:09.960325 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pjrfv" Feb 19 12:50:09 crc kubenswrapper[4833]: I0219 12:50:09.960869 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pjrfv" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.005615 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pjrfv" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.062414 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pjrfv" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.380720 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rbjpt" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.381137 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rbjpt" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.448933 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rbjpt" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.802740 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7484f6b95f-4km27"] Feb 19 12:50:10 crc kubenswrapper[4833]: E0219 12:50:10.803020 4833 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="b5adb7ca-e392-4fff-aad0-078c4b6de62e" containerName="extract-content" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.803044 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5adb7ca-e392-4fff-aad0-078c4b6de62e" containerName="extract-content" Feb 19 12:50:10 crc kubenswrapper[4833]: E0219 12:50:10.803068 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bcce72d-6a5d-42d2-b7ed-c721057061f6" containerName="oauth-openshift" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.803082 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bcce72d-6a5d-42d2-b7ed-c721057061f6" containerName="oauth-openshift" Feb 19 12:50:10 crc kubenswrapper[4833]: E0219 12:50:10.803094 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5adb7ca-e392-4fff-aad0-078c4b6de62e" containerName="registry-server" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.803105 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5adb7ca-e392-4fff-aad0-078c4b6de62e" containerName="registry-server" Feb 19 12:50:10 crc kubenswrapper[4833]: E0219 12:50:10.803123 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5adb7ca-e392-4fff-aad0-078c4b6de62e" containerName="extract-utilities" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.803134 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5adb7ca-e392-4fff-aad0-078c4b6de62e" containerName="extract-utilities" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.803314 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5adb7ca-e392-4fff-aad0-078c4b6de62e" containerName="registry-server" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.803334 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bcce72d-6a5d-42d2-b7ed-c721057061f6" containerName="oauth-openshift" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.803959 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.805808 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.806619 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.806909 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.807125 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.807130 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.807543 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.807595 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.807735 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.807841 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.808002 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.808715 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.809091 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.817174 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.820170 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.825645 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7484f6b95f-4km27"] Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.825899 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.888130 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-audit-policies\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " 
pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.888358 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.888445 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.888570 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.888702 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-v4-0-config-user-template-error\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.888794 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-v4-0-config-system-router-certs\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.888875 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-v4-0-config-system-service-ca\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.888947 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-v4-0-config-system-session\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.889027 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw2jh\" (UniqueName: 
\"kubernetes.io/projected/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-kube-api-access-sw2jh\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.889096 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.889209 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.889253 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-audit-dir\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.889282 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.889299 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-v4-0-config-user-template-login\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.990532 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.990606 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-audit-dir\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.990635 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" 
(UniqueName: \"kubernetes.io/configmap/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.990657 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-v4-0-config-user-template-login\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.990685 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-audit-policies\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.990711 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.990735 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.990761 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.990807 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-v4-0-config-user-template-error\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.990838 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-v4-0-config-system-router-certs\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.990868 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-v4-0-config-system-service-ca\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.990888 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-v4-0-config-system-session\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.990908 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw2jh\" (UniqueName: \"kubernetes.io/projected/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-kube-api-access-sw2jh\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.990932 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.991847 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-audit-dir\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.992835 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-audit-policies\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.994156 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.995574 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:10 crc kubenswrapper[4833]: I0219 12:50:10.995852 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-v4-0-config-system-service-ca\") pod 
\"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:11 crc kubenswrapper[4833]: I0219 12:50:11.001416 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-v4-0-config-user-template-error\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:11 crc kubenswrapper[4833]: I0219 12:50:11.002187 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:11 crc kubenswrapper[4833]: I0219 12:50:11.002707 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:11 crc kubenswrapper[4833]: I0219 12:50:11.003472 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-v4-0-config-system-session\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:11 crc kubenswrapper[4833]: I0219 12:50:11.003641 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:11 crc kubenswrapper[4833]: I0219 12:50:11.007453 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-v4-0-config-system-router-certs\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:11 crc kubenswrapper[4833]: I0219 12:50:11.007680 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-v4-0-config-user-template-login\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:11 crc kubenswrapper[4833]: I0219 12:50:11.008105 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7484f6b95f-4km27\" 
(UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:11 crc kubenswrapper[4833]: I0219 12:50:11.012948 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw2jh\" (UniqueName: \"kubernetes.io/projected/2f94d3d2-d12d-4bc0-9776-ffee26c4171f-kube-api-access-sw2jh\") pod \"oauth-openshift-7484f6b95f-4km27\" (UID: \"2f94d3d2-d12d-4bc0-9776-ffee26c4171f\") " pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:11 crc kubenswrapper[4833]: I0219 12:50:11.065621 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rbjpt" Feb 19 12:50:11 crc kubenswrapper[4833]: I0219 12:50:11.124157 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:11 crc kubenswrapper[4833]: I0219 12:50:11.553771 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7484f6b95f-4km27"] Feb 19 12:50:11 crc kubenswrapper[4833]: I0219 12:50:11.754158 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rbjpt"] Feb 19 12:50:12 crc kubenswrapper[4833]: I0219 12:50:12.011020 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" event={"ID":"2f94d3d2-d12d-4bc0-9776-ffee26c4171f","Type":"ContainerStarted","Data":"fbcae69d25fed94618beabf2acbfc860336a00e67b0775cef4ff54e503ea792b"} Feb 19 12:50:12 crc kubenswrapper[4833]: I0219 12:50:12.011095 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" event={"ID":"2f94d3d2-d12d-4bc0-9776-ffee26c4171f","Type":"ContainerStarted","Data":"44691bacb8c158e70d8411d37ad1ba9fd500ce8fc15c1c3a138ca436f6fef406"} Feb 19 12:50:12 crc kubenswrapper[4833]: I0219 12:50:12.043603 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" podStartSLOduration=32.043568816 podStartE2EDuration="32.043568816s" podCreationTimestamp="2026-02-19 12:49:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:50:12.042150049 +0000 UTC m=+222.437668887" watchObservedRunningTime="2026-02-19 12:50:12.043568816 +0000 UTC m=+222.439087614" Feb 19 12:50:13 crc kubenswrapper[4833]: I0219 12:50:13.018702 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:13 crc kubenswrapper[4833]: I0219 12:50:13.018698 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rbjpt" podUID="950b9cae-bb19-478e-b128-83968a16e80f" containerName="registry-server" containerID="cri-o://9e7d3611d3688ce12706eeacabd662ff424e5819bfa34dd0a938a6e0855e4a94" gracePeriod=2 Feb 19 12:50:13 crc kubenswrapper[4833]: I0219 12:50:13.030655 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7484f6b95f-4km27" Feb 19 12:50:13 crc kubenswrapper[4833]: I0219 12:50:13.538197 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rbjpt" Feb 19 12:50:13 crc kubenswrapper[4833]: I0219 12:50:13.631598 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2s8xp\" (UniqueName: \"kubernetes.io/projected/950b9cae-bb19-478e-b128-83968a16e80f-kube-api-access-2s8xp\") pod \"950b9cae-bb19-478e-b128-83968a16e80f\" (UID: \"950b9cae-bb19-478e-b128-83968a16e80f\") " Feb 19 12:50:13 crc kubenswrapper[4833]: I0219 12:50:13.631660 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/950b9cae-bb19-478e-b128-83968a16e80f-catalog-content\") pod \"950b9cae-bb19-478e-b128-83968a16e80f\" (UID: \"950b9cae-bb19-478e-b128-83968a16e80f\") " Feb 19 12:50:13 crc kubenswrapper[4833]: I0219 12:50:13.631721 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/950b9cae-bb19-478e-b128-83968a16e80f-utilities\") pod \"950b9cae-bb19-478e-b128-83968a16e80f\" (UID: \"950b9cae-bb19-478e-b128-83968a16e80f\") " Feb 19 12:50:13 crc kubenswrapper[4833]: I0219 12:50:13.632574 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/950b9cae-bb19-478e-b128-83968a16e80f-utilities" (OuterVolumeSpecName: "utilities") pod "950b9cae-bb19-478e-b128-83968a16e80f" (UID: "950b9cae-bb19-478e-b128-83968a16e80f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 12:50:13 crc kubenswrapper[4833]: I0219 12:50:13.639346 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/950b9cae-bb19-478e-b128-83968a16e80f-kube-api-access-2s8xp" (OuterVolumeSpecName: "kube-api-access-2s8xp") pod "950b9cae-bb19-478e-b128-83968a16e80f" (UID: "950b9cae-bb19-478e-b128-83968a16e80f"). InnerVolumeSpecName "kube-api-access-2s8xp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:50:13 crc kubenswrapper[4833]: I0219 12:50:13.652157 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/950b9cae-bb19-478e-b128-83968a16e80f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "950b9cae-bb19-478e-b128-83968a16e80f" (UID: "950b9cae-bb19-478e-b128-83968a16e80f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 12:50:13 crc kubenswrapper[4833]: I0219 12:50:13.733654 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2s8xp\" (UniqueName: \"kubernetes.io/projected/950b9cae-bb19-478e-b128-83968a16e80f-kube-api-access-2s8xp\") on node \"crc\" DevicePath \"\"" Feb 19 12:50:13 crc kubenswrapper[4833]: I0219 12:50:13.733702 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/950b9cae-bb19-478e-b128-83968a16e80f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 12:50:13 crc kubenswrapper[4833]: I0219 12:50:13.733720 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/950b9cae-bb19-478e-b128-83968a16e80f-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.027895 4833 generic.go:334] "Generic (PLEG): container finished" podID="950b9cae-bb19-478e-b128-83968a16e80f" containerID="9e7d3611d3688ce12706eeacabd662ff424e5819bfa34dd0a938a6e0855e4a94" exitCode=0 Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.027944 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rbjpt" event={"ID":"950b9cae-bb19-478e-b128-83968a16e80f","Type":"ContainerDied","Data":"9e7d3611d3688ce12706eeacabd662ff424e5819bfa34dd0a938a6e0855e4a94"} Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.027988 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rbjpt" event={"ID":"950b9cae-bb19-478e-b128-83968a16e80f","Type":"ContainerDied","Data":"11e508ec00743fa5234479229be4dd522e0c92345c925b6b5dc4048c87e4cfb8"} Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.027993 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rbjpt" Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.028010 4833 scope.go:117] "RemoveContainer" containerID="9e7d3611d3688ce12706eeacabd662ff424e5819bfa34dd0a938a6e0855e4a94" Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.051771 4833 scope.go:117] "RemoveContainer" containerID="f6bd404db875b51bde39c9221e015062f2a56abffcda1a7ad153d6e3109cbde7" Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.077055 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rbjpt"] Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.083484 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rbjpt"] Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.086616 4833 scope.go:117] "RemoveContainer" containerID="894af90146483a1c6dda22bea58f23f77338025369b1806361bd3c0730975240" Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.109698 4833 scope.go:117] "RemoveContainer" containerID="9e7d3611d3688ce12706eeacabd662ff424e5819bfa34dd0a938a6e0855e4a94" Feb 19 12:50:14 crc kubenswrapper[4833]: E0219 12:50:14.110240 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e7d3611d3688ce12706eeacabd662ff424e5819bfa34dd0a938a6e0855e4a94\": container with ID starting with 9e7d3611d3688ce12706eeacabd662ff424e5819bfa34dd0a938a6e0855e4a94 not found: ID does not exist" containerID="9e7d3611d3688ce12706eeacabd662ff424e5819bfa34dd0a938a6e0855e4a94" Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.110290 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e7d3611d3688ce12706eeacabd662ff424e5819bfa34dd0a938a6e0855e4a94"} err="failed to get container status \"9e7d3611d3688ce12706eeacabd662ff424e5819bfa34dd0a938a6e0855e4a94\": rpc error: code = NotFound desc = could not find container \"9e7d3611d3688ce12706eeacabd662ff424e5819bfa34dd0a938a6e0855e4a94\": container with ID starting with 9e7d3611d3688ce12706eeacabd662ff424e5819bfa34dd0a938a6e0855e4a94 not found: ID does not exist" Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.110334 4833 scope.go:117] "RemoveContainer" containerID="f6bd404db875b51bde39c9221e015062f2a56abffcda1a7ad153d6e3109cbde7" Feb 19 12:50:14 crc kubenswrapper[4833]: E0219 12:50:14.111155 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6bd404db875b51bde39c9221e015062f2a56abffcda1a7ad153d6e3109cbde7\": container with ID starting with f6bd404db875b51bde39c9221e015062f2a56abffcda1a7ad153d6e3109cbde7 not found: ID does not exist" containerID="f6bd404db875b51bde39c9221e015062f2a56abffcda1a7ad153d6e3109cbde7" Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.111479 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6bd404db875b51bde39c9221e015062f2a56abffcda1a7ad153d6e3109cbde7"} err="failed to get container status \"f6bd404db875b51bde39c9221e015062f2a56abffcda1a7ad153d6e3109cbde7\": rpc error: code = NotFound desc = could not find container \"f6bd404db875b51bde39c9221e015062f2a56abffcda1a7ad153d6e3109cbde7\": container with ID starting with f6bd404db875b51bde39c9221e015062f2a56abffcda1a7ad153d6e3109cbde7 not found: ID does not exist" Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.111678 4833 scope.go:117] "RemoveContainer" 
containerID="894af90146483a1c6dda22bea58f23f77338025369b1806361bd3c0730975240" Feb 19 12:50:14 crc kubenswrapper[4833]: E0219 12:50:14.112208 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"894af90146483a1c6dda22bea58f23f77338025369b1806361bd3c0730975240\": container with ID starting with 894af90146483a1c6dda22bea58f23f77338025369b1806361bd3c0730975240 not found: ID does not exist" containerID="894af90146483a1c6dda22bea58f23f77338025369b1806361bd3c0730975240" Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.112242 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"894af90146483a1c6dda22bea58f23f77338025369b1806361bd3c0730975240"} err="failed to get container status \"894af90146483a1c6dda22bea58f23f77338025369b1806361bd3c0730975240\": rpc error: code = NotFound desc = could not find container \"894af90146483a1c6dda22bea58f23f77338025369b1806361bd3c0730975240\": container with ID starting with 894af90146483a1c6dda22bea58f23f77338025369b1806361bd3c0730975240 not found: ID does not exist" Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.209145 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5fb7794c96-65b5k"] Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.209439 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5fb7794c96-65b5k" podUID="b6c7039d-02a0-46f9-8e57-37b9ee2d8188" containerName="controller-manager" containerID="cri-o://ca4211d503d2b78912ed499c35a77280e33c30ff5dff960c7d317f85603099c1" gracePeriod=30 Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.289431 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-574568df96-8xgfl"] Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.289699 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-574568df96-8xgfl" podUID="8dae7827-aadd-4279-9f5e-7eefc2b6bb46" containerName="route-controller-manager" containerID="cri-o://76df6e6de7bdca94a3482aec8a295303c447b17282d1d4a44f1dc555df129ada" gracePeriod=30 Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.321461 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="950b9cae-bb19-478e-b128-83968a16e80f" path="/var/lib/kubelet/pods/950b9cae-bb19-478e-b128-83968a16e80f/volumes" Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.727941 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-574568df96-8xgfl" Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.732280 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5fb7794c96-65b5k" Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.846914 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6c7039d-02a0-46f9-8e57-37b9ee2d8188-serving-cert\") pod \"b6c7039d-02a0-46f9-8e57-37b9ee2d8188\" (UID: \"b6c7039d-02a0-46f9-8e57-37b9ee2d8188\") " Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.847005 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6c7039d-02a0-46f9-8e57-37b9ee2d8188-client-ca\") pod \"b6c7039d-02a0-46f9-8e57-37b9ee2d8188\" (UID: \"b6c7039d-02a0-46f9-8e57-37b9ee2d8188\") " Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.847069 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dae7827-aadd-4279-9f5e-7eefc2b6bb46-config\") pod \"8dae7827-aadd-4279-9f5e-7eefc2b6bb46\" (UID: \"8dae7827-aadd-4279-9f5e-7eefc2b6bb46\") " Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.847158 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krhgj\" (UniqueName: \"kubernetes.io/projected/b6c7039d-02a0-46f9-8e57-37b9ee2d8188-kube-api-access-krhgj\") pod \"b6c7039d-02a0-46f9-8e57-37b9ee2d8188\" (UID: \"b6c7039d-02a0-46f9-8e57-37b9ee2d8188\") " Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.847201 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6c7039d-02a0-46f9-8e57-37b9ee2d8188-config\") pod \"b6c7039d-02a0-46f9-8e57-37b9ee2d8188\" (UID: \"b6c7039d-02a0-46f9-8e57-37b9ee2d8188\") " Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.847251 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dae7827-aadd-4279-9f5e-7eefc2b6bb46-serving-cert\") pod \"8dae7827-aadd-4279-9f5e-7eefc2b6bb46\" (UID: \"8dae7827-aadd-4279-9f5e-7eefc2b6bb46\") " Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.847284 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4j4r8\" (UniqueName: \"kubernetes.io/projected/8dae7827-aadd-4279-9f5e-7eefc2b6bb46-kube-api-access-4j4r8\") pod \"8dae7827-aadd-4279-9f5e-7eefc2b6bb46\" (UID: \"8dae7827-aadd-4279-9f5e-7eefc2b6bb46\") " Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.847335 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b6c7039d-02a0-46f9-8e57-37b9ee2d8188-proxy-ca-bundles\") pod \"b6c7039d-02a0-46f9-8e57-37b9ee2d8188\" (UID: \"b6c7039d-02a0-46f9-8e57-37b9ee2d8188\") " Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.847376 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8dae7827-aadd-4279-9f5e-7eefc2b6bb46-client-ca\") pod \"8dae7827-aadd-4279-9f5e-7eefc2b6bb46\" (UID: \"8dae7827-aadd-4279-9f5e-7eefc2b6bb46\") " Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.847797 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c7039d-02a0-46f9-8e57-37b9ee2d8188-client-ca" (OuterVolumeSpecName: "client-ca") pod "b6c7039d-02a0-46f9-8e57-37b9ee2d8188" (UID: 
"b6c7039d-02a0-46f9-8e57-37b9ee2d8188"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.847992 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c7039d-02a0-46f9-8e57-37b9ee2d8188-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b6c7039d-02a0-46f9-8e57-37b9ee2d8188" (UID: "b6c7039d-02a0-46f9-8e57-37b9ee2d8188"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.848082 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c7039d-02a0-46f9-8e57-37b9ee2d8188-config" (OuterVolumeSpecName: "config") pod "b6c7039d-02a0-46f9-8e57-37b9ee2d8188" (UID: "b6c7039d-02a0-46f9-8e57-37b9ee2d8188"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.848303 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dae7827-aadd-4279-9f5e-7eefc2b6bb46-config" (OuterVolumeSpecName: "config") pod "8dae7827-aadd-4279-9f5e-7eefc2b6bb46" (UID: "8dae7827-aadd-4279-9f5e-7eefc2b6bb46"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.848362 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dae7827-aadd-4279-9f5e-7eefc2b6bb46-client-ca" (OuterVolumeSpecName: "client-ca") pod "8dae7827-aadd-4279-9f5e-7eefc2b6bb46" (UID: "8dae7827-aadd-4279-9f5e-7eefc2b6bb46"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.851085 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dae7827-aadd-4279-9f5e-7eefc2b6bb46-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8dae7827-aadd-4279-9f5e-7eefc2b6bb46" (UID: "8dae7827-aadd-4279-9f5e-7eefc2b6bb46"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.851359 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6c7039d-02a0-46f9-8e57-37b9ee2d8188-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b6c7039d-02a0-46f9-8e57-37b9ee2d8188" (UID: "b6c7039d-02a0-46f9-8e57-37b9ee2d8188"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.851409 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dae7827-aadd-4279-9f5e-7eefc2b6bb46-kube-api-access-4j4r8" (OuterVolumeSpecName: "kube-api-access-4j4r8") pod "8dae7827-aadd-4279-9f5e-7eefc2b6bb46" (UID: "8dae7827-aadd-4279-9f5e-7eefc2b6bb46"). InnerVolumeSpecName "kube-api-access-4j4r8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.851487 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6c7039d-02a0-46f9-8e57-37b9ee2d8188-kube-api-access-krhgj" (OuterVolumeSpecName: "kube-api-access-krhgj") pod "b6c7039d-02a0-46f9-8e57-37b9ee2d8188" (UID: "b6c7039d-02a0-46f9-8e57-37b9ee2d8188"). 
InnerVolumeSpecName "kube-api-access-krhgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.948689 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krhgj\" (UniqueName: \"kubernetes.io/projected/b6c7039d-02a0-46f9-8e57-37b9ee2d8188-kube-api-access-krhgj\") on node \"crc\" DevicePath \"\"" Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.948722 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6c7039d-02a0-46f9-8e57-37b9ee2d8188-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.948733 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dae7827-aadd-4279-9f5e-7eefc2b6bb46-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.948742 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4j4r8\" (UniqueName: \"kubernetes.io/projected/8dae7827-aadd-4279-9f5e-7eefc2b6bb46-kube-api-access-4j4r8\") on node \"crc\" DevicePath \"\"" Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.948751 4833 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b6c7039d-02a0-46f9-8e57-37b9ee2d8188-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.948759 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8dae7827-aadd-4279-9f5e-7eefc2b6bb46-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.948766 4833 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6c7039d-02a0-46f9-8e57-37b9ee2d8188-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.948774 4833 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b6c7039d-02a0-46f9-8e57-37b9ee2d8188-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 12:50:14 crc kubenswrapper[4833]: I0219 12:50:14.948781 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dae7827-aadd-4279-9f5e-7eefc2b6bb46-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.033443 4833 generic.go:334] "Generic (PLEG): container finished" podID="8dae7827-aadd-4279-9f5e-7eefc2b6bb46" containerID="76df6e6de7bdca94a3482aec8a295303c447b17282d1d4a44f1dc555df129ada" exitCode=0 Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.033518 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-574568df96-8xgfl" event={"ID":"8dae7827-aadd-4279-9f5e-7eefc2b6bb46","Type":"ContainerDied","Data":"76df6e6de7bdca94a3482aec8a295303c447b17282d1d4a44f1dc555df129ada"} Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.033541 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-574568df96-8xgfl" event={"ID":"8dae7827-aadd-4279-9f5e-7eefc2b6bb46","Type":"ContainerDied","Data":"618fb42158202a98512a691b84c91e7c07bd8cccfb2ed3e636008a2ae74727ac"} Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.033566 4833 scope.go:117] "RemoveContainer" 
containerID="76df6e6de7bdca94a3482aec8a295303c447b17282d1d4a44f1dc555df129ada" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.033648 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-574568df96-8xgfl" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.036407 4833 generic.go:334] "Generic (PLEG): container finished" podID="b6c7039d-02a0-46f9-8e57-37b9ee2d8188" containerID="ca4211d503d2b78912ed499c35a77280e33c30ff5dff960c7d317f85603099c1" exitCode=0 Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.037322 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5fb7794c96-65b5k" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.037327 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fb7794c96-65b5k" event={"ID":"b6c7039d-02a0-46f9-8e57-37b9ee2d8188","Type":"ContainerDied","Data":"ca4211d503d2b78912ed499c35a77280e33c30ff5dff960c7d317f85603099c1"} Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.037471 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fb7794c96-65b5k" event={"ID":"b6c7039d-02a0-46f9-8e57-37b9ee2d8188","Type":"ContainerDied","Data":"f81e532b189d5c17935af8e52a2d3bdf49b7bb6a431e9ff99697e114b53787ab"} Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.060283 4833 scope.go:117] "RemoveContainer" containerID="76df6e6de7bdca94a3482aec8a295303c447b17282d1d4a44f1dc555df129ada" Feb 19 12:50:15 crc kubenswrapper[4833]: E0219 12:50:15.063862 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76df6e6de7bdca94a3482aec8a295303c447b17282d1d4a44f1dc555df129ada\": container with ID starting with 76df6e6de7bdca94a3482aec8a295303c447b17282d1d4a44f1dc555df129ada not found: ID does not exist" containerID="76df6e6de7bdca94a3482aec8a295303c447b17282d1d4a44f1dc555df129ada" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.064115 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76df6e6de7bdca94a3482aec8a295303c447b17282d1d4a44f1dc555df129ada"} err="failed to get container status \"76df6e6de7bdca94a3482aec8a295303c447b17282d1d4a44f1dc555df129ada\": rpc error: code = NotFound desc = could not find container \"76df6e6de7bdca94a3482aec8a295303c447b17282d1d4a44f1dc555df129ada\": container with ID starting with 76df6e6de7bdca94a3482aec8a295303c447b17282d1d4a44f1dc555df129ada not found: ID does not exist" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.064154 4833 scope.go:117] "RemoveContainer" containerID="ca4211d503d2b78912ed499c35a77280e33c30ff5dff960c7d317f85603099c1" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.065168 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-574568df96-8xgfl"] Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.068594 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-574568df96-8xgfl"] Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.080345 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5fb7794c96-65b5k"] Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.087044 4833 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5fb7794c96-65b5k"] Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.087103 4833 scope.go:117] "RemoveContainer" containerID="ca4211d503d2b78912ed499c35a77280e33c30ff5dff960c7d317f85603099c1" Feb 19 12:50:15 crc kubenswrapper[4833]: E0219 12:50:15.087442 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca4211d503d2b78912ed499c35a77280e33c30ff5dff960c7d317f85603099c1\": container with ID starting with ca4211d503d2b78912ed499c35a77280e33c30ff5dff960c7d317f85603099c1 not found: ID does not exist" containerID="ca4211d503d2b78912ed499c35a77280e33c30ff5dff960c7d317f85603099c1" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.087472 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca4211d503d2b78912ed499c35a77280e33c30ff5dff960c7d317f85603099c1"} err="failed to get container status \"ca4211d503d2b78912ed499c35a77280e33c30ff5dff960c7d317f85603099c1\": rpc error: code = NotFound desc = could not find container \"ca4211d503d2b78912ed499c35a77280e33c30ff5dff960c7d317f85603099c1\": container with ID starting with ca4211d503d2b78912ed499c35a77280e33c30ff5dff960c7d317f85603099c1 not found: ID does not exist" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.745146 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.745253 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.745350 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.746097 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16"} pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.746190 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" containerID="cri-o://26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16" gracePeriod=600 Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.808283 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-867b7b6f6c-4w5jz"] Feb 19 12:50:15 crc kubenswrapper[4833]: E0219 12:50:15.808590 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6c7039d-02a0-46f9-8e57-37b9ee2d8188" containerName="controller-manager" Feb 19 12:50:15 crc 
kubenswrapper[4833]: I0219 12:50:15.808615 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6c7039d-02a0-46f9-8e57-37b9ee2d8188" containerName="controller-manager" Feb 19 12:50:15 crc kubenswrapper[4833]: E0219 12:50:15.808629 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="950b9cae-bb19-478e-b128-83968a16e80f" containerName="extract-content" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.808648 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="950b9cae-bb19-478e-b128-83968a16e80f" containerName="extract-content" Feb 19 12:50:15 crc kubenswrapper[4833]: E0219 12:50:15.808658 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="950b9cae-bb19-478e-b128-83968a16e80f" containerName="extract-utilities" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.808668 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="950b9cae-bb19-478e-b128-83968a16e80f" containerName="extract-utilities" Feb 19 12:50:15 crc kubenswrapper[4833]: E0219 12:50:15.808679 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="950b9cae-bb19-478e-b128-83968a16e80f" containerName="registry-server" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.808687 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="950b9cae-bb19-478e-b128-83968a16e80f" containerName="registry-server" Feb 19 12:50:15 crc kubenswrapper[4833]: E0219 12:50:15.808705 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dae7827-aadd-4279-9f5e-7eefc2b6bb46" containerName="route-controller-manager" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.808713 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dae7827-aadd-4279-9f5e-7eefc2b6bb46" containerName="route-controller-manager" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.808835 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6c7039d-02a0-46f9-8e57-37b9ee2d8188" containerName="controller-manager" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.808855 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dae7827-aadd-4279-9f5e-7eefc2b6bb46" containerName="route-controller-manager" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.808865 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="950b9cae-bb19-478e-b128-83968a16e80f" containerName="registry-server" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.809409 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-867b7b6f6c-4w5jz" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.810909 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5b6c6c8965-gd8dk"] Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.811535 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b6c6c8965-gd8dk" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.811835 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.812256 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.812306 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.812364 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.812481 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.813211 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.815135 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.816316 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.816449 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.816632 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.816750 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.816946 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.820864 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-867b7b6f6c-4w5jz"] Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.823419 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.834446 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b6c6c8965-gd8dk"] Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.865076 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c3ef8ef4-e396-483e-8802-d9315ee48e39-client-ca\") pod \"controller-manager-5b6c6c8965-gd8dk\" (UID: \"c3ef8ef4-e396-483e-8802-d9315ee48e39\") " pod="openshift-controller-manager/controller-manager-5b6c6c8965-gd8dk" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.865117 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/8749f8b7-4ac9-4680-a911-b73b5f12bd7f-client-ca\") pod \"route-controller-manager-867b7b6f6c-4w5jz\" (UID: \"8749f8b7-4ac9-4680-a911-b73b5f12bd7f\") " pod="openshift-route-controller-manager/route-controller-manager-867b7b6f6c-4w5jz" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.865156 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6jtn\" (UniqueName: \"kubernetes.io/projected/c3ef8ef4-e396-483e-8802-d9315ee48e39-kube-api-access-t6jtn\") pod \"controller-manager-5b6c6c8965-gd8dk\" (UID: \"c3ef8ef4-e396-483e-8802-d9315ee48e39\") " pod="openshift-controller-manager/controller-manager-5b6c6c8965-gd8dk" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.865185 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbdl4\" (UniqueName: \"kubernetes.io/projected/8749f8b7-4ac9-4680-a911-b73b5f12bd7f-kube-api-access-nbdl4\") pod \"route-controller-manager-867b7b6f6c-4w5jz\" (UID: \"8749f8b7-4ac9-4680-a911-b73b5f12bd7f\") " pod="openshift-route-controller-manager/route-controller-manager-867b7b6f6c-4w5jz" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.865249 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3ef8ef4-e396-483e-8802-d9315ee48e39-serving-cert\") pod \"controller-manager-5b6c6c8965-gd8dk\" (UID: \"c3ef8ef4-e396-483e-8802-d9315ee48e39\") " pod="openshift-controller-manager/controller-manager-5b6c6c8965-gd8dk" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.865272 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c3ef8ef4-e396-483e-8802-d9315ee48e39-proxy-ca-bundles\") pod \"controller-manager-5b6c6c8965-gd8dk\" (UID: \"c3ef8ef4-e396-483e-8802-d9315ee48e39\") " pod="openshift-controller-manager/controller-manager-5b6c6c8965-gd8dk" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.865290 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8749f8b7-4ac9-4680-a911-b73b5f12bd7f-config\") pod \"route-controller-manager-867b7b6f6c-4w5jz\" (UID: \"8749f8b7-4ac9-4680-a911-b73b5f12bd7f\") " pod="openshift-route-controller-manager/route-controller-manager-867b7b6f6c-4w5jz" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.865314 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8749f8b7-4ac9-4680-a911-b73b5f12bd7f-serving-cert\") pod \"route-controller-manager-867b7b6f6c-4w5jz\" (UID: \"8749f8b7-4ac9-4680-a911-b73b5f12bd7f\") " pod="openshift-route-controller-manager/route-controller-manager-867b7b6f6c-4w5jz" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.865338 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3ef8ef4-e396-483e-8802-d9315ee48e39-config\") pod \"controller-manager-5b6c6c8965-gd8dk\" (UID: \"c3ef8ef4-e396-483e-8802-d9315ee48e39\") " pod="openshift-controller-manager/controller-manager-5b6c6c8965-gd8dk" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.966542 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-nbdl4\" (UniqueName: \"kubernetes.io/projected/8749f8b7-4ac9-4680-a911-b73b5f12bd7f-kube-api-access-nbdl4\") pod \"route-controller-manager-867b7b6f6c-4w5jz\" (UID: \"8749f8b7-4ac9-4680-a911-b73b5f12bd7f\") " pod="openshift-route-controller-manager/route-controller-manager-867b7b6f6c-4w5jz" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.966922 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3ef8ef4-e396-483e-8802-d9315ee48e39-serving-cert\") pod \"controller-manager-5b6c6c8965-gd8dk\" (UID: \"c3ef8ef4-e396-483e-8802-d9315ee48e39\") " pod="openshift-controller-manager/controller-manager-5b6c6c8965-gd8dk" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.966951 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c3ef8ef4-e396-483e-8802-d9315ee48e39-proxy-ca-bundles\") pod \"controller-manager-5b6c6c8965-gd8dk\" (UID: \"c3ef8ef4-e396-483e-8802-d9315ee48e39\") " pod="openshift-controller-manager/controller-manager-5b6c6c8965-gd8dk" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.966980 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8749f8b7-4ac9-4680-a911-b73b5f12bd7f-config\") pod \"route-controller-manager-867b7b6f6c-4w5jz\" (UID: \"8749f8b7-4ac9-4680-a911-b73b5f12bd7f\") " pod="openshift-route-controller-manager/route-controller-manager-867b7b6f6c-4w5jz" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.967016 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8749f8b7-4ac9-4680-a911-b73b5f12bd7f-serving-cert\") pod \"route-controller-manager-867b7b6f6c-4w5jz\" (UID: \"8749f8b7-4ac9-4680-a911-b73b5f12bd7f\") " pod="openshift-route-controller-manager/route-controller-manager-867b7b6f6c-4w5jz" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.967051 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3ef8ef4-e396-483e-8802-d9315ee48e39-config\") pod \"controller-manager-5b6c6c8965-gd8dk\" (UID: \"c3ef8ef4-e396-483e-8802-d9315ee48e39\") " pod="openshift-controller-manager/controller-manager-5b6c6c8965-gd8dk" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.967099 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c3ef8ef4-e396-483e-8802-d9315ee48e39-client-ca\") pod \"controller-manager-5b6c6c8965-gd8dk\" (UID: \"c3ef8ef4-e396-483e-8802-d9315ee48e39\") " pod="openshift-controller-manager/controller-manager-5b6c6c8965-gd8dk" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.967125 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8749f8b7-4ac9-4680-a911-b73b5f12bd7f-client-ca\") pod \"route-controller-manager-867b7b6f6c-4w5jz\" (UID: \"8749f8b7-4ac9-4680-a911-b73b5f12bd7f\") " pod="openshift-route-controller-manager/route-controller-manager-867b7b6f6c-4w5jz" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.967172 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6jtn\" (UniqueName: \"kubernetes.io/projected/c3ef8ef4-e396-483e-8802-d9315ee48e39-kube-api-access-t6jtn\") pod 
\"controller-manager-5b6c6c8965-gd8dk\" (UID: \"c3ef8ef4-e396-483e-8802-d9315ee48e39\") " pod="openshift-controller-manager/controller-manager-5b6c6c8965-gd8dk" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.968122 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8749f8b7-4ac9-4680-a911-b73b5f12bd7f-client-ca\") pod \"route-controller-manager-867b7b6f6c-4w5jz\" (UID: \"8749f8b7-4ac9-4680-a911-b73b5f12bd7f\") " pod="openshift-route-controller-manager/route-controller-manager-867b7b6f6c-4w5jz" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.968205 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c3ef8ef4-e396-483e-8802-d9315ee48e39-client-ca\") pod \"controller-manager-5b6c6c8965-gd8dk\" (UID: \"c3ef8ef4-e396-483e-8802-d9315ee48e39\") " pod="openshift-controller-manager/controller-manager-5b6c6c8965-gd8dk" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.968270 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c3ef8ef4-e396-483e-8802-d9315ee48e39-proxy-ca-bundles\") pod \"controller-manager-5b6c6c8965-gd8dk\" (UID: \"c3ef8ef4-e396-483e-8802-d9315ee48e39\") " pod="openshift-controller-manager/controller-manager-5b6c6c8965-gd8dk" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.968520 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8749f8b7-4ac9-4680-a911-b73b5f12bd7f-config\") pod \"route-controller-manager-867b7b6f6c-4w5jz\" (UID: \"8749f8b7-4ac9-4680-a911-b73b5f12bd7f\") " pod="openshift-route-controller-manager/route-controller-manager-867b7b6f6c-4w5jz" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.968856 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3ef8ef4-e396-483e-8802-d9315ee48e39-config\") pod \"controller-manager-5b6c6c8965-gd8dk\" (UID: \"c3ef8ef4-e396-483e-8802-d9315ee48e39\") " pod="openshift-controller-manager/controller-manager-5b6c6c8965-gd8dk" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.975184 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3ef8ef4-e396-483e-8802-d9315ee48e39-serving-cert\") pod \"controller-manager-5b6c6c8965-gd8dk\" (UID: \"c3ef8ef4-e396-483e-8802-d9315ee48e39\") " pod="openshift-controller-manager/controller-manager-5b6c6c8965-gd8dk" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.975281 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8749f8b7-4ac9-4680-a911-b73b5f12bd7f-serving-cert\") pod \"route-controller-manager-867b7b6f6c-4w5jz\" (UID: \"8749f8b7-4ac9-4680-a911-b73b5f12bd7f\") " pod="openshift-route-controller-manager/route-controller-manager-867b7b6f6c-4w5jz" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.985965 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6jtn\" (UniqueName: \"kubernetes.io/projected/c3ef8ef4-e396-483e-8802-d9315ee48e39-kube-api-access-t6jtn\") pod \"controller-manager-5b6c6c8965-gd8dk\" (UID: \"c3ef8ef4-e396-483e-8802-d9315ee48e39\") " pod="openshift-controller-manager/controller-manager-5b6c6c8965-gd8dk" Feb 19 12:50:15 crc kubenswrapper[4833]: I0219 12:50:15.986221 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbdl4\" (UniqueName: \"kubernetes.io/projected/8749f8b7-4ac9-4680-a911-b73b5f12bd7f-kube-api-access-nbdl4\") pod \"route-controller-manager-867b7b6f6c-4w5jz\" (UID: \"8749f8b7-4ac9-4680-a911-b73b5f12bd7f\") " pod="openshift-route-controller-manager/route-controller-manager-867b7b6f6c-4w5jz" Feb 19 12:50:16 crc kubenswrapper[4833]: I0219 12:50:16.045530 4833 generic.go:334] "Generic (PLEG): container finished" podID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerID="26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16" exitCode=0 Feb 19 12:50:16 crc kubenswrapper[4833]: I0219 12:50:16.045633 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" event={"ID":"a396d626-cea2-42cf-84c5-943b0b85a92b","Type":"ContainerDied","Data":"26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16"} Feb 19 12:50:16 crc kubenswrapper[4833]: I0219 12:50:16.045697 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" event={"ID":"a396d626-cea2-42cf-84c5-943b0b85a92b","Type":"ContainerStarted","Data":"bcc68a3815c8acc741b1eb062ad00066f331696d45bcdc1069fe57166e6a3a3c"} Feb 19 12:50:16 crc kubenswrapper[4833]: I0219 12:50:16.126428 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-867b7b6f6c-4w5jz" Feb 19 12:50:16 crc kubenswrapper[4833]: I0219 12:50:16.133678 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b6c6c8965-gd8dk" Feb 19 12:50:16 crc kubenswrapper[4833]: I0219 12:50:16.326777 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dae7827-aadd-4279-9f5e-7eefc2b6bb46" path="/var/lib/kubelet/pods/8dae7827-aadd-4279-9f5e-7eefc2b6bb46/volumes" Feb 19 12:50:16 crc kubenswrapper[4833]: I0219 12:50:16.328016 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6c7039d-02a0-46f9-8e57-37b9ee2d8188" path="/var/lib/kubelet/pods/b6c7039d-02a0-46f9-8e57-37b9ee2d8188/volumes" Feb 19 12:50:16 crc kubenswrapper[4833]: I0219 12:50:16.342481 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-867b7b6f6c-4w5jz"] Feb 19 12:50:16 crc kubenswrapper[4833]: W0219 12:50:16.353769 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8749f8b7_4ac9_4680_a911_b73b5f12bd7f.slice/crio-16d67f3bc5138ea93dd008f2aa8af1e6de32f55ef667de7779bc6966a47f7609 WatchSource:0}: Error finding container 16d67f3bc5138ea93dd008f2aa8af1e6de32f55ef667de7779bc6966a47f7609: Status 404 returned error can't find the container with id 16d67f3bc5138ea93dd008f2aa8af1e6de32f55ef667de7779bc6966a47f7609 Feb 19 12:50:16 crc kubenswrapper[4833]: I0219 12:50:16.398345 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b6c6c8965-gd8dk"] Feb 19 12:50:17 crc kubenswrapper[4833]: I0219 12:50:17.053169 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b6c6c8965-gd8dk" event={"ID":"c3ef8ef4-e396-483e-8802-d9315ee48e39","Type":"ContainerStarted","Data":"06987237e827ffac3d7cd9af2e024dc74939711e381f54e1bc2a732ac28dd8b0"} Feb 19 12:50:17 crc kubenswrapper[4833]: 
I0219 12:50:17.053534 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b6c6c8965-gd8dk" event={"ID":"c3ef8ef4-e396-483e-8802-d9315ee48e39","Type":"ContainerStarted","Data":"9dfc6b0751919ce1a7f7bb6551695cfd0e8793cafbe6c2c60af0eb38d485bb78"} Feb 19 12:50:17 crc kubenswrapper[4833]: I0219 12:50:17.053555 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5b6c6c8965-gd8dk" Feb 19 12:50:17 crc kubenswrapper[4833]: I0219 12:50:17.055816 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-867b7b6f6c-4w5jz" event={"ID":"8749f8b7-4ac9-4680-a911-b73b5f12bd7f","Type":"ContainerStarted","Data":"5814ec72a53e8228ff11ecfb6abe0e18720b654e55839db64d252b7f19df9854"} Feb 19 12:50:17 crc kubenswrapper[4833]: I0219 12:50:17.055853 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-867b7b6f6c-4w5jz" event={"ID":"8749f8b7-4ac9-4680-a911-b73b5f12bd7f","Type":"ContainerStarted","Data":"16d67f3bc5138ea93dd008f2aa8af1e6de32f55ef667de7779bc6966a47f7609"} Feb 19 12:50:17 crc kubenswrapper[4833]: I0219 12:50:17.056067 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-867b7b6f6c-4w5jz" Feb 19 12:50:17 crc kubenswrapper[4833]: I0219 12:50:17.059153 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5b6c6c8965-gd8dk" Feb 19 12:50:17 crc kubenswrapper[4833]: I0219 12:50:17.067950 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-867b7b6f6c-4w5jz" Feb 19 12:50:17 crc kubenswrapper[4833]: I0219 12:50:17.077345 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5b6c6c8965-gd8dk" podStartSLOduration=3.077323062 podStartE2EDuration="3.077323062s" podCreationTimestamp="2026-02-19 12:50:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:50:17.071551132 +0000 UTC m=+227.467069920" watchObservedRunningTime="2026-02-19 12:50:17.077323062 +0000 UTC m=+227.472841840" Feb 19 12:50:17 crc kubenswrapper[4833]: I0219 12:50:17.117250 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-867b7b6f6c-4w5jz" podStartSLOduration=3.117232164 podStartE2EDuration="3.117232164s" podCreationTimestamp="2026-02-19 12:50:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:50:17.115083488 +0000 UTC m=+227.510602306" watchObservedRunningTime="2026-02-19 12:50:17.117232164 +0000 UTC m=+227.512750942" Feb 19 12:50:21 crc kubenswrapper[4833]: I0219 12:50:21.978868 4833 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 12:50:21 crc kubenswrapper[4833]: I0219 12:50:21.980340 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 12:50:21 crc kubenswrapper[4833]: I0219 12:50:21.981267 4833 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 12:50:21 crc kubenswrapper[4833]: I0219 12:50:21.983173 4833 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 12:50:21 crc kubenswrapper[4833]: I0219 12:50:21.983626 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7" gracePeriod=15 Feb 19 12:50:21 crc kubenswrapper[4833]: I0219 12:50:21.983670 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94" gracePeriod=15 Feb 19 12:50:21 crc kubenswrapper[4833]: I0219 12:50:21.983610 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7" gracePeriod=15 Feb 19 12:50:21 crc kubenswrapper[4833]: I0219 12:50:21.983702 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a" gracePeriod=15 Feb 19 12:50:21 crc kubenswrapper[4833]: I0219 12:50:21.983738 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888" gracePeriod=15 Feb 19 12:50:21 crc kubenswrapper[4833]: E0219 12:50:21.984221 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 12:50:21 crc kubenswrapper[4833]: I0219 12:50:21.984244 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 12:50:21 crc kubenswrapper[4833]: E0219 12:50:21.984277 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 12:50:21 crc kubenswrapper[4833]: I0219 12:50:21.984287 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 12:50:21 crc kubenswrapper[4833]: E0219 12:50:21.984302 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 12:50:21 crc kubenswrapper[4833]: I0219 12:50:21.984312 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-syncer" Feb 19 12:50:21 crc kubenswrapper[4833]: E0219 12:50:21.984326 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 19 12:50:21 crc kubenswrapper[4833]: I0219 12:50:21.984334 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 19 12:50:21 crc kubenswrapper[4833]: E0219 12:50:21.984347 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 12:50:21 crc kubenswrapper[4833]: I0219 12:50:21.984355 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 12:50:21 crc kubenswrapper[4833]: E0219 12:50:21.984364 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 12:50:21 crc kubenswrapper[4833]: I0219 12:50:21.984373 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 12:50:21 crc kubenswrapper[4833]: E0219 12:50:21.984387 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 12:50:21 crc kubenswrapper[4833]: I0219 12:50:21.984395 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 12:50:21 crc kubenswrapper[4833]: I0219 12:50:21.984584 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 12:50:21 crc kubenswrapper[4833]: I0219 12:50:21.984601 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 12:50:21 crc kubenswrapper[4833]: I0219 12:50:21.984616 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 19 12:50:21 crc kubenswrapper[4833]: I0219 12:50:21.984633 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 19 12:50:21 crc kubenswrapper[4833]: I0219 12:50:21.984658 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 19 12:50:21 crc kubenswrapper[4833]: I0219 12:50:21.984929 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 19 12:50:22 crc kubenswrapper[4833]: I0219 12:50:22.055072 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 12:50:22 crc kubenswrapper[4833]: I0219 12:50:22.055455 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 12:50:22 crc kubenswrapper[4833]: I0219 12:50:22.055488 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 12:50:22 crc kubenswrapper[4833]: I0219 12:50:22.055550 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 12:50:22 crc kubenswrapper[4833]: I0219 12:50:22.055599 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 12:50:22 crc kubenswrapper[4833]: I0219 12:50:22.055622 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 12:50:22 crc kubenswrapper[4833]: I0219 12:50:22.055651 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 12:50:22 crc kubenswrapper[4833]: I0219 12:50:22.055673 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 12:50:22 crc kubenswrapper[4833]: I0219 12:50:22.156423 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 12:50:22 crc kubenswrapper[4833]: I0219 12:50:22.156461 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 12:50:22 crc kubenswrapper[4833]: I0219 12:50:22.156480 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 12:50:22 crc kubenswrapper[4833]: I0219 12:50:22.156514 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 12:50:22 crc kubenswrapper[4833]: I0219 12:50:22.156568 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 12:50:22 crc kubenswrapper[4833]: I0219 12:50:22.156577 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 12:50:22 crc kubenswrapper[4833]: I0219 12:50:22.156612 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 12:50:22 crc kubenswrapper[4833]: I0219 12:50:22.156634 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 12:50:22 crc kubenswrapper[4833]: I0219 12:50:22.156595 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 12:50:22 crc kubenswrapper[4833]: I0219 12:50:22.156637 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 12:50:22 crc kubenswrapper[4833]: I0219 12:50:22.156579 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 12:50:22 crc kubenswrapper[4833]: I0219 12:50:22.156672 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 12:50:22 crc kubenswrapper[4833]: I0219 12:50:22.156680 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 12:50:22 crc kubenswrapper[4833]: I0219 12:50:22.156704 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 12:50:22 crc kubenswrapper[4833]: I0219 12:50:22.156765 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 12:50:22 crc kubenswrapper[4833]: I0219 12:50:22.156778 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 12:50:23 crc kubenswrapper[4833]: I0219 12:50:23.089546 4833 generic.go:334] "Generic (PLEG): container finished" podID="2582174b-9c9d-465e-9f88-e249c815e8a0" containerID="0cf1232ab313ae99d4d8ab61b318baa4f6a01a638943d3aaed1642931dafc036" exitCode=0 Feb 19 12:50:23 crc kubenswrapper[4833]: I0219 12:50:23.089611 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2582174b-9c9d-465e-9f88-e249c815e8a0","Type":"ContainerDied","Data":"0cf1232ab313ae99d4d8ab61b318baa4f6a01a638943d3aaed1642931dafc036"} Feb 19 12:50:23 crc kubenswrapper[4833]: I0219 12:50:23.090615 4833 status_manager.go:851] "Failed to get status for pod" podUID="2582174b-9c9d-465e-9f88-e249c815e8a0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 19 12:50:23 crc kubenswrapper[4833]: I0219 12:50:23.092649 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 19 12:50:23 crc kubenswrapper[4833]: I0219 12:50:23.094289 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 12:50:23 crc kubenswrapper[4833]: I0219 12:50:23.095037 4833 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7" exitCode=0 Feb 19 12:50:23 crc kubenswrapper[4833]: I0219 12:50:23.095062 4833 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a" exitCode=0 Feb 19 12:50:23 crc kubenswrapper[4833]: I0219 12:50:23.095075 4833 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94" exitCode=0 Feb 19 12:50:23 crc kubenswrapper[4833]: I0219 12:50:23.095087 4833 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888" exitCode=2 Feb 19 12:50:23 crc kubenswrapper[4833]: I0219 12:50:23.095123 4833 scope.go:117] "RemoveContainer" containerID="1be0264ef2a938b611106bf6969bf6b0aa34d663c5ceb65c70940750be3ed1b5" Feb 19 12:50:24 crc kubenswrapper[4833]: I0219 12:50:24.148929 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 12:50:24 crc kubenswrapper[4833]: I0219 12:50:24.509827 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 12:50:24 crc kubenswrapper[4833]: I0219 12:50:24.511053 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 12:50:24 crc kubenswrapper[4833]: I0219 12:50:24.512059 4833 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 19 12:50:24 crc kubenswrapper[4833]: I0219 12:50:24.512622 4833 status_manager.go:851] "Failed to get status for pod" podUID="2582174b-9c9d-465e-9f88-e249c815e8a0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 19 12:50:24 crc kubenswrapper[4833]: I0219 12:50:24.513896 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 12:50:24 crc kubenswrapper[4833]: I0219 12:50:24.516969 4833 status_manager.go:851] "Failed to get status for pod" podUID="2582174b-9c9d-465e-9f88-e249c815e8a0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 19 12:50:24 crc kubenswrapper[4833]: I0219 12:50:24.517219 4833 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 19 12:50:24 crc kubenswrapper[4833]: I0219 12:50:24.602813 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2582174b-9c9d-465e-9f88-e249c815e8a0-var-lock\") pod \"2582174b-9c9d-465e-9f88-e249c815e8a0\" (UID: \"2582174b-9c9d-465e-9f88-e249c815e8a0\") " Feb 19 12:50:24 crc kubenswrapper[4833]: I0219 12:50:24.602883 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2582174b-9c9d-465e-9f88-e249c815e8a0-kubelet-dir\") pod \"2582174b-9c9d-465e-9f88-e249c815e8a0\" (UID: \"2582174b-9c9d-465e-9f88-e249c815e8a0\") " Feb 19 12:50:24 crc kubenswrapper[4833]: I0219 12:50:24.602932 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 12:50:24 crc kubenswrapper[4833]: I0219 12:50:24.602971 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 12:50:24 crc kubenswrapper[4833]: I0219 12:50:24.603004 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2582174b-9c9d-465e-9f88-e249c815e8a0-kube-api-access\") pod \"2582174b-9c9d-465e-9f88-e249c815e8a0\" (UID: \"2582174b-9c9d-465e-9f88-e249c815e8a0\") " Feb 19 12:50:24 crc kubenswrapper[4833]: I0219 12:50:24.603030 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 19 12:50:24 crc kubenswrapper[4833]: I0219 12:50:24.603039 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2582174b-9c9d-465e-9f88-e249c815e8a0-var-lock" (OuterVolumeSpecName: "var-lock") pod "2582174b-9c9d-465e-9f88-e249c815e8a0" (UID: "2582174b-9c9d-465e-9f88-e249c815e8a0"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 12:50:24 crc kubenswrapper[4833]: I0219 12:50:24.603071 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2582174b-9c9d-465e-9f88-e249c815e8a0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2582174b-9c9d-465e-9f88-e249c815e8a0" (UID: "2582174b-9c9d-465e-9f88-e249c815e8a0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 12:50:24 crc kubenswrapper[4833]: I0219 12:50:24.603120 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 12:50:24 crc kubenswrapper[4833]: I0219 12:50:24.603126 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 12:50:24 crc kubenswrapper[4833]: I0219 12:50:24.603269 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 12:50:24 crc kubenswrapper[4833]: I0219 12:50:24.603426 4833 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 19 12:50:24 crc kubenswrapper[4833]: I0219 12:50:24.603458 4833 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2582174b-9c9d-465e-9f88-e249c815e8a0-var-lock\") on node \"crc\" DevicePath \"\"" Feb 19 12:50:24 crc kubenswrapper[4833]: I0219 12:50:24.603477 4833 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2582174b-9c9d-465e-9f88-e249c815e8a0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 12:50:24 crc kubenswrapper[4833]: I0219 12:50:24.603515 4833 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 19 12:50:24 crc kubenswrapper[4833]: I0219 12:50:24.603531 4833 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 12:50:24 crc kubenswrapper[4833]: I0219 12:50:24.608259 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2582174b-9c9d-465e-9f88-e249c815e8a0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2582174b-9c9d-465e-9f88-e249c815e8a0" (UID: "2582174b-9c9d-465e-9f88-e249c815e8a0"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:50:24 crc kubenswrapper[4833]: I0219 12:50:24.704708 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2582174b-9c9d-465e-9f88-e249c815e8a0-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 12:50:25 crc kubenswrapper[4833]: I0219 12:50:25.157808 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2582174b-9c9d-465e-9f88-e249c815e8a0","Type":"ContainerDied","Data":"2c2f5ce60abf7695bb5c0a797fb9f2c13c9083e2654240edd2a861e7cb833934"} Feb 19 12:50:25 crc kubenswrapper[4833]: I0219 12:50:25.157839 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 19 12:50:25 crc kubenswrapper[4833]: I0219 12:50:25.157849 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c2f5ce60abf7695bb5c0a797fb9f2c13c9083e2654240edd2a861e7cb833934" Feb 19 12:50:25 crc kubenswrapper[4833]: I0219 12:50:25.160532 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 19 12:50:25 crc kubenswrapper[4833]: I0219 12:50:25.161097 4833 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7" exitCode=0 Feb 19 12:50:25 crc kubenswrapper[4833]: I0219 12:50:25.161152 4833 scope.go:117] "RemoveContainer" containerID="e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7" Feb 19 12:50:25 crc kubenswrapper[4833]: I0219 12:50:25.161162 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 12:50:25 crc kubenswrapper[4833]: I0219 12:50:25.174121 4833 status_manager.go:851] "Failed to get status for pod" podUID="2582174b-9c9d-465e-9f88-e249c815e8a0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 19 12:50:25 crc kubenswrapper[4833]: I0219 12:50:25.174475 4833 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 19 12:50:25 crc kubenswrapper[4833]: I0219 12:50:25.182277 4833 status_manager.go:851] "Failed to get status for pod" podUID="2582174b-9c9d-465e-9f88-e249c815e8a0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 19 12:50:25 crc kubenswrapper[4833]: I0219 12:50:25.182564 4833 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 19 12:50:25 crc kubenswrapper[4833]: I0219 12:50:25.191779 4833 scope.go:117] "RemoveContainer" containerID="44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a" Feb 19 12:50:25 crc kubenswrapper[4833]: I0219 12:50:25.204268 4833 scope.go:117] "RemoveContainer" containerID="e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94" Feb 19 12:50:25 crc kubenswrapper[4833]: I0219 12:50:25.223467 4833 scope.go:117] "RemoveContainer" containerID="edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888" Feb 19 12:50:25 crc kubenswrapper[4833]: I0219 12:50:25.241422 4833 scope.go:117] "RemoveContainer" containerID="c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7" Feb 19 12:50:25 crc kubenswrapper[4833]: I0219 12:50:25.262702 4833 scope.go:117] "RemoveContainer" containerID="6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e" Feb 19 12:50:25 crc kubenswrapper[4833]: I0219 12:50:25.289937 4833 scope.go:117] "RemoveContainer" containerID="e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7" Feb 19 12:50:25 crc kubenswrapper[4833]: E0219 12:50:25.290474 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\": container with ID starting with e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7 not found: ID does not exist" containerID="e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7" Feb 19 12:50:25 crc kubenswrapper[4833]: I0219 12:50:25.290520 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7"} err="failed to get container status \"e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\": rpc error: code = NotFound desc = could not find container 
\"e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7\": container with ID starting with e021cb86e921e57f9eda9f66eb459ff9f8ab81eba6135384eae6c96b7c8830c7 not found: ID does not exist" Feb 19 12:50:25 crc kubenswrapper[4833]: I0219 12:50:25.290544 4833 scope.go:117] "RemoveContainer" containerID="44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a" Feb 19 12:50:25 crc kubenswrapper[4833]: E0219 12:50:25.290740 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\": container with ID starting with 44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a not found: ID does not exist" containerID="44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a" Feb 19 12:50:25 crc kubenswrapper[4833]: I0219 12:50:25.290759 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a"} err="failed to get container status \"44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\": rpc error: code = NotFound desc = could not find container \"44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a\": container with ID starting with 44f30eb00bf4ab3a2a447dfe031bc89bb7dc6d839ee2db9709f925ebc25c538a not found: ID does not exist" Feb 19 12:50:25 crc kubenswrapper[4833]: I0219 12:50:25.290776 4833 scope.go:117] "RemoveContainer" containerID="e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94" Feb 19 12:50:25 crc kubenswrapper[4833]: E0219 12:50:25.291731 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\": container with ID starting with e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94 not found: ID does not exist" containerID="e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94" Feb 19 12:50:25 crc kubenswrapper[4833]: I0219 12:50:25.291758 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94"} err="failed to get container status \"e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\": rpc error: code = NotFound desc = could not find container \"e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94\": container with ID starting with e71cf442d912e6d3e45beed49dfc15bd976b433604856d024d8683048e91ef94 not found: ID does not exist" Feb 19 12:50:25 crc kubenswrapper[4833]: I0219 12:50:25.291773 4833 scope.go:117] "RemoveContainer" containerID="edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888" Feb 19 12:50:25 crc kubenswrapper[4833]: E0219 12:50:25.292612 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\": container with ID starting with edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888 not found: ID does not exist" containerID="edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888" Feb 19 12:50:25 crc kubenswrapper[4833]: I0219 12:50:25.292639 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888"} 
err="failed to get container status \"edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\": rpc error: code = NotFound desc = could not find container \"edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888\": container with ID starting with edc5dc297cdbae7b565383ff1ba211674c87423e62489297731c2925d56b8888 not found: ID does not exist" Feb 19 12:50:25 crc kubenswrapper[4833]: I0219 12:50:25.292655 4833 scope.go:117] "RemoveContainer" containerID="c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7" Feb 19 12:50:25 crc kubenswrapper[4833]: E0219 12:50:25.292929 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\": container with ID starting with c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7 not found: ID does not exist" containerID="c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7" Feb 19 12:50:25 crc kubenswrapper[4833]: I0219 12:50:25.292945 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7"} err="failed to get container status \"c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\": rpc error: code = NotFound desc = could not find container \"c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7\": container with ID starting with c3061652f5bf001886a475e5a31b8f9b7a376b3e083f993a5034d4f36b4162d7 not found: ID does not exist" Feb 19 12:50:25 crc kubenswrapper[4833]: I0219 12:50:25.292957 4833 scope.go:117] "RemoveContainer" containerID="6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e" Feb 19 12:50:25 crc kubenswrapper[4833]: E0219 12:50:25.293189 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\": container with ID starting with 6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e not found: ID does not exist" containerID="6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e" Feb 19 12:50:25 crc kubenswrapper[4833]: I0219 12:50:25.293211 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e"} err="failed to get container status \"6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\": rpc error: code = NotFound desc = could not find container \"6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e\": container with ID starting with 6b39b338b0ad5577be74356f0f5727636add260e824247ae881971f968e06d8e not found: ID does not exist" Feb 19 12:50:26 crc kubenswrapper[4833]: I0219 12:50:26.321046 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 19 12:50:27 crc kubenswrapper[4833]: E0219 12:50:27.032893 4833 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.222:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 12:50:27 crc kubenswrapper[4833]: I0219 12:50:27.034110 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 12:50:27 crc kubenswrapper[4833]: W0219 12:50:27.056723 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-ad9e228f959a0e5e2bc739421ff7d5cdcdf609f925b67a68cb5a0112dacb2cc5 WatchSource:0}: Error finding container ad9e228f959a0e5e2bc739421ff7d5cdcdf609f925b67a68cb5a0112dacb2cc5: Status 404 returned error can't find the container with id ad9e228f959a0e5e2bc739421ff7d5cdcdf609f925b67a68cb5a0112dacb2cc5 Feb 19 12:50:27 crc kubenswrapper[4833]: E0219 12:50:27.059651 4833 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.222:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1895a6ccd1883171 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 12:50:27.059061105 +0000 UTC m=+237.454579873,LastTimestamp:2026-02-19 12:50:27.059061105 +0000 UTC m=+237.454579873,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 12:50:27 crc kubenswrapper[4833]: I0219 12:50:27.176234 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ad9e228f959a0e5e2bc739421ff7d5cdcdf609f925b67a68cb5a0112dacb2cc5"} Feb 19 12:50:28 crc kubenswrapper[4833]: I0219 12:50:28.185384 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"56b63c8a7c41465187f164c49b42150a1143f67832557e299e0d250dd14e7f3b"} Feb 19 12:50:28 crc kubenswrapper[4833]: E0219 12:50:28.192062 4833 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.222:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 12:50:28 crc kubenswrapper[4833]: I0219 12:50:28.192061 4833 status_manager.go:851] "Failed to get status for pod" podUID="2582174b-9c9d-465e-9f88-e249c815e8a0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 19 12:50:29 crc kubenswrapper[4833]: E0219 12:50:29.194943 4833 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.222:6443: connect: connection refused" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 12:50:29 crc kubenswrapper[4833]: E0219 12:50:29.667027 4833 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 19 12:50:29 crc kubenswrapper[4833]: E0219 12:50:29.667680 4833 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 19 12:50:29 crc kubenswrapper[4833]: E0219 12:50:29.668139 4833 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 19 12:50:29 crc kubenswrapper[4833]: E0219 12:50:29.668560 4833 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 19 12:50:29 crc kubenswrapper[4833]: E0219 12:50:29.668982 4833 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 19 12:50:29 crc kubenswrapper[4833]: I0219 12:50:29.669048 4833 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 19 12:50:29 crc kubenswrapper[4833]: E0219 12:50:29.669439 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="200ms" Feb 19 12:50:29 crc kubenswrapper[4833]: E0219 12:50:29.871433 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="400ms" Feb 19 12:50:30 crc kubenswrapper[4833]: E0219 12:50:30.217320 4833 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.222:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1895a6ccd1883171 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 12:50:27.059061105 +0000 UTC m=+237.454579873,LastTimestamp:2026-02-19 12:50:27.059061105 +0000 UTC m=+237.454579873,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 12:50:30 crc kubenswrapper[4833]: E0219 12:50:30.272878 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="800ms" Feb 19 12:50:30 crc kubenswrapper[4833]: I0219 12:50:30.323734 4833 status_manager.go:851] "Failed to get status for pod" podUID="2582174b-9c9d-465e-9f88-e249c815e8a0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 19 12:50:31 crc kubenswrapper[4833]: E0219 12:50:31.073953 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="1.6s" Feb 19 12:50:32 crc kubenswrapper[4833]: E0219 12:50:32.675982 4833 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.222:6443: connect: connection refused" interval="3.2s" Feb 19 12:50:34 crc kubenswrapper[4833]: I0219 12:50:34.314845 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 12:50:34 crc kubenswrapper[4833]: I0219 12:50:34.316878 4833 status_manager.go:851] "Failed to get status for pod" podUID="2582174b-9c9d-465e-9f88-e249c815e8a0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 19 12:50:34 crc kubenswrapper[4833]: I0219 12:50:34.342740 4833 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="59c103b3-9730-4f0a-b308-8deb1a89ec5f" Feb 19 12:50:34 crc kubenswrapper[4833]: I0219 12:50:34.342785 4833 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="59c103b3-9730-4f0a-b308-8deb1a89ec5f" Feb 19 12:50:34 crc kubenswrapper[4833]: E0219 12:50:34.343916 4833 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 12:50:34 crc kubenswrapper[4833]: I0219 12:50:34.345024 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 12:50:35 crc kubenswrapper[4833]: I0219 12:50:35.239578 4833 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="67d6572749980a7837ec225b79286b58dc61ac3f584f0e9f729086162f17cc59" exitCode=0 Feb 19 12:50:35 crc kubenswrapper[4833]: I0219 12:50:35.239696 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"67d6572749980a7837ec225b79286b58dc61ac3f584f0e9f729086162f17cc59"} Feb 19 12:50:35 crc kubenswrapper[4833]: I0219 12:50:35.240048 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"442d0dd9d99cbc4d7831c476582099431257d7543c244c36e6f5c823b4770248"} Feb 19 12:50:35 crc kubenswrapper[4833]: I0219 12:50:35.240420 4833 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="59c103b3-9730-4f0a-b308-8deb1a89ec5f" Feb 19 12:50:35 crc kubenswrapper[4833]: I0219 12:50:35.240451 4833 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="59c103b3-9730-4f0a-b308-8deb1a89ec5f" Feb 19 12:50:35 crc kubenswrapper[4833]: I0219 12:50:35.241189 4833 status_manager.go:851] "Failed to get status for pod" podUID="2582174b-9c9d-465e-9f88-e249c815e8a0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 19 12:50:35 crc kubenswrapper[4833]: E0219 12:50:35.241211 4833 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 12:50:35 crc kubenswrapper[4833]: I0219 12:50:35.244885 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 19 12:50:35 crc kubenswrapper[4833]: I0219 12:50:35.244965 4833 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="be4ef1418f9464e95e739e3a543b14668c04159065dea6093d086d75a32d919e" exitCode=1 Feb 19 12:50:35 crc kubenswrapper[4833]: I0219 12:50:35.245011 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"be4ef1418f9464e95e739e3a543b14668c04159065dea6093d086d75a32d919e"} Feb 19 12:50:35 crc kubenswrapper[4833]: I0219 12:50:35.245741 4833 scope.go:117] "RemoveContainer" containerID="be4ef1418f9464e95e739e3a543b14668c04159065dea6093d086d75a32d919e" Feb 19 12:50:35 crc kubenswrapper[4833]: I0219 12:50:35.246092 4833 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 19 12:50:35 
crc kubenswrapper[4833]: I0219 12:50:35.247306 4833 status_manager.go:851] "Failed to get status for pod" podUID="2582174b-9c9d-465e-9f88-e249c815e8a0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.222:6443: connect: connection refused" Feb 19 12:50:36 crc kubenswrapper[4833]: I0219 12:50:36.256856 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e703937c0cdf777cfa66dd072679b11d3ecda381efbec75d604e8516121fe9d7"} Feb 19 12:50:36 crc kubenswrapper[4833]: I0219 12:50:36.257178 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0fdef1a67d926a8e3afe28c56d946ca18b91a132305096cfb72095cc39e5bf0a"} Feb 19 12:50:36 crc kubenswrapper[4833]: I0219 12:50:36.260329 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 19 12:50:36 crc kubenswrapper[4833]: I0219 12:50:36.260376 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5d23bb64a57d3efce4d4449135609aa9e2f8eac527fe7cd47552025b90dae14c"} Feb 19 12:50:37 crc kubenswrapper[4833]: I0219 12:50:37.206157 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 12:50:37 crc kubenswrapper[4833]: I0219 12:50:37.267645 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e687a72cd6edd8cbbad4360b6039ae888d8c8c85558da8e7d7554891096d1796"} Feb 19 12:50:37 crc kubenswrapper[4833]: I0219 12:50:37.267708 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"550a774415e83705c07c617f46787cb7f4619bc6dd972de7ba8baa9a38db3f39"} Feb 19 12:50:37 crc kubenswrapper[4833]: I0219 12:50:37.267731 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0b950cc73df917c62ddd50c7f1281a0516ae1831fc002afb0074fe6d5798018d"} Feb 19 12:50:37 crc kubenswrapper[4833]: I0219 12:50:37.268050 4833 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="59c103b3-9730-4f0a-b308-8deb1a89ec5f" Feb 19 12:50:37 crc kubenswrapper[4833]: I0219 12:50:37.268078 4833 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="59c103b3-9730-4f0a-b308-8deb1a89ec5f" Feb 19 12:50:39 crc kubenswrapper[4833]: I0219 12:50:39.344780 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 12:50:39 crc kubenswrapper[4833]: I0219 12:50:39.345137 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 12:50:39 crc 
kubenswrapper[4833]: I0219 12:50:39.355257 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 12:50:41 crc kubenswrapper[4833]: I0219 12:50:41.233682 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 12:50:41 crc kubenswrapper[4833]: I0219 12:50:41.240604 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 12:50:42 crc kubenswrapper[4833]: I0219 12:50:42.286064 4833 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 12:50:43 crc kubenswrapper[4833]: I0219 12:50:43.315307 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 12:50:43 crc kubenswrapper[4833]: I0219 12:50:43.315462 4833 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="59c103b3-9730-4f0a-b308-8deb1a89ec5f" Feb 19 12:50:43 crc kubenswrapper[4833]: I0219 12:50:43.315898 4833 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="59c103b3-9730-4f0a-b308-8deb1a89ec5f" Feb 19 12:50:43 crc kubenswrapper[4833]: I0219 12:50:43.322597 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 12:50:43 crc kubenswrapper[4833]: I0219 12:50:43.325749 4833 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="66fc0a2f-b181-4f37-9bac-f9d97b9db65c" Feb 19 12:50:44 crc kubenswrapper[4833]: I0219 12:50:44.321337 4833 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="59c103b3-9730-4f0a-b308-8deb1a89ec5f" Feb 19 12:50:44 crc kubenswrapper[4833]: I0219 12:50:44.321403 4833 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="59c103b3-9730-4f0a-b308-8deb1a89ec5f" Feb 19 12:50:45 crc kubenswrapper[4833]: I0219 12:50:45.328209 4833 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="59c103b3-9730-4f0a-b308-8deb1a89ec5f" Feb 19 12:50:45 crc kubenswrapper[4833]: I0219 12:50:45.328254 4833 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="59c103b3-9730-4f0a-b308-8deb1a89ec5f" Feb 19 12:50:47 crc kubenswrapper[4833]: I0219 12:50:47.212354 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 12:50:50 crc kubenswrapper[4833]: I0219 12:50:50.346415 4833 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="66fc0a2f-b181-4f37-9bac-f9d97b9db65c" Feb 19 12:50:53 crc kubenswrapper[4833]: I0219 12:50:53.134100 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 19 12:50:53 crc kubenswrapper[4833]: I0219 12:50:53.577825 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 19 12:50:53 crc kubenswrapper[4833]: 
I0219 12:50:53.635420 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 19 12:50:53 crc kubenswrapper[4833]: I0219 12:50:53.692148 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 19 12:50:53 crc kubenswrapper[4833]: I0219 12:50:53.729620 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 19 12:50:53 crc kubenswrapper[4833]: I0219 12:50:53.811194 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 19 12:50:54 crc kubenswrapper[4833]: I0219 12:50:54.148139 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 19 12:50:54 crc kubenswrapper[4833]: I0219 12:50:54.309882 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 19 12:50:54 crc kubenswrapper[4833]: I0219 12:50:54.569355 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 12:50:54 crc kubenswrapper[4833]: I0219 12:50:54.741308 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 19 12:50:54 crc kubenswrapper[4833]: I0219 12:50:54.784774 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 19 12:50:54 crc kubenswrapper[4833]: I0219 12:50:54.955843 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 19 12:50:54 crc kubenswrapper[4833]: I0219 12:50:54.957813 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 19 12:50:54 crc kubenswrapper[4833]: I0219 12:50:54.960414 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 19 12:50:55 crc kubenswrapper[4833]: I0219 12:50:55.259722 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 19 12:50:55 crc kubenswrapper[4833]: I0219 12:50:55.291794 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 19 12:50:55 crc kubenswrapper[4833]: I0219 12:50:55.371401 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 19 12:50:55 crc kubenswrapper[4833]: I0219 12:50:55.402372 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 19 12:50:55 crc kubenswrapper[4833]: I0219 12:50:55.437627 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 19 12:50:55 crc kubenswrapper[4833]: I0219 12:50:55.507211 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 19 12:50:55 crc kubenswrapper[4833]: I0219 12:50:55.599278 4833 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 19 12:50:55 crc kubenswrapper[4833]: I0219 
12:50:55.614475 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 19 12:50:55 crc kubenswrapper[4833]: I0219 12:50:55.799467 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 19 12:50:55 crc kubenswrapper[4833]: I0219 12:50:55.828373 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 19 12:50:55 crc kubenswrapper[4833]: I0219 12:50:55.978189 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 19 12:50:56 crc kubenswrapper[4833]: I0219 12:50:56.084963 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 19 12:50:56 crc kubenswrapper[4833]: I0219 12:50:56.110759 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 19 12:50:56 crc kubenswrapper[4833]: I0219 12:50:56.139090 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 19 12:50:56 crc kubenswrapper[4833]: I0219 12:50:56.146387 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 19 12:50:56 crc kubenswrapper[4833]: I0219 12:50:56.354956 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 19 12:50:56 crc kubenswrapper[4833]: I0219 12:50:56.365637 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 19 12:50:56 crc kubenswrapper[4833]: I0219 12:50:56.498172 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 19 12:50:56 crc kubenswrapper[4833]: I0219 12:50:56.557639 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 19 12:50:56 crc kubenswrapper[4833]: I0219 12:50:56.563323 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 19 12:50:56 crc kubenswrapper[4833]: I0219 12:50:56.571680 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 19 12:50:56 crc kubenswrapper[4833]: I0219 12:50:56.594742 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 19 12:50:56 crc kubenswrapper[4833]: I0219 12:50:56.690418 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 19 12:50:56 crc kubenswrapper[4833]: I0219 12:50:56.718911 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 19 12:50:56 crc kubenswrapper[4833]: I0219 12:50:56.834060 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 19 12:50:56 crc kubenswrapper[4833]: I0219 12:50:56.862149 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 19 12:50:56 crc kubenswrapper[4833]: I0219 12:50:56.871341 4833 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 19 12:50:56 crc kubenswrapper[4833]: I0219 12:50:56.888977 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 19 12:50:56 crc kubenswrapper[4833]: I0219 12:50:56.952439 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 19 12:50:56 crc kubenswrapper[4833]: I0219 12:50:56.971254 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 19 12:50:57 crc kubenswrapper[4833]: I0219 12:50:57.186121 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 19 12:50:57 crc kubenswrapper[4833]: I0219 12:50:57.189605 4833 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 19 12:50:57 crc kubenswrapper[4833]: I0219 12:50:57.253988 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 19 12:50:57 crc kubenswrapper[4833]: I0219 12:50:57.256629 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 19 12:50:57 crc kubenswrapper[4833]: I0219 12:50:57.371106 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 19 12:50:57 crc kubenswrapper[4833]: I0219 12:50:57.425377 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 19 12:50:57 crc kubenswrapper[4833]: I0219 12:50:57.466669 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 19 12:50:57 crc kubenswrapper[4833]: I0219 12:50:57.497920 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 19 12:50:57 crc kubenswrapper[4833]: I0219 12:50:57.501620 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 19 12:50:57 crc kubenswrapper[4833]: I0219 12:50:57.539539 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 19 12:50:57 crc kubenswrapper[4833]: I0219 12:50:57.643258 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 19 12:50:57 crc kubenswrapper[4833]: I0219 12:50:57.688428 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 19 12:50:57 crc kubenswrapper[4833]: I0219 12:50:57.735345 4833 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 19 12:50:57 crc kubenswrapper[4833]: I0219 12:50:57.739782 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 12:50:57 crc kubenswrapper[4833]: I0219 12:50:57.739929 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 12:50:57 crc kubenswrapper[4833]: I0219 12:50:57.740311 4833 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="59c103b3-9730-4f0a-b308-8deb1a89ec5f" Feb 19 12:50:57 crc kubenswrapper[4833]: I0219 12:50:57.740345 4833 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="59c103b3-9730-4f0a-b308-8deb1a89ec5f" Feb 19 12:50:57 crc kubenswrapper[4833]: I0219 12:50:57.746468 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 12:50:57 crc kubenswrapper[4833]: I0219 12:50:57.801335 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.801310459 podStartE2EDuration="15.801310459s" podCreationTimestamp="2026-02-19 12:50:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:50:57.773932188 +0000 UTC m=+268.169450976" watchObservedRunningTime="2026-02-19 12:50:57.801310459 +0000 UTC m=+268.196829237" Feb 19 12:50:57 crc kubenswrapper[4833]: I0219 12:50:57.917330 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 19 12:50:57 crc kubenswrapper[4833]: I0219 12:50:57.919296 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 19 12:50:58 crc kubenswrapper[4833]: I0219 12:50:58.005917 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 19 12:50:58 crc kubenswrapper[4833]: I0219 12:50:58.051270 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 12:50:58 crc kubenswrapper[4833]: I0219 12:50:58.092620 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 19 12:50:58 crc kubenswrapper[4833]: I0219 12:50:58.138401 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 19 12:50:58 crc kubenswrapper[4833]: I0219 12:50:58.306907 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 19 12:50:58 crc kubenswrapper[4833]: I0219 12:50:58.319310 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 19 12:50:58 crc kubenswrapper[4833]: I0219 12:50:58.390772 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 19 12:50:58 crc kubenswrapper[4833]: I0219 12:50:58.429150 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 19 12:50:58 crc kubenswrapper[4833]: I0219 12:50:58.755863 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 19 12:50:58 crc kubenswrapper[4833]: I0219 12:50:58.828117 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 19 12:50:58 crc kubenswrapper[4833]: I0219 12:50:58.857866 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 19 12:50:58 crc kubenswrapper[4833]: I0219 12:50:58.916154 4833 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 19 12:50:58 crc kubenswrapper[4833]: I0219 12:50:58.925651 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 19 12:50:58 crc kubenswrapper[4833]: I0219 12:50:58.938378 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 19 12:50:58 crc kubenswrapper[4833]: I0219 12:50:58.940328 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 19 12:50:59 crc kubenswrapper[4833]: I0219 12:50:59.024694 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 19 12:50:59 crc kubenswrapper[4833]: I0219 12:50:59.081319 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 19 12:50:59 crc kubenswrapper[4833]: I0219 12:50:59.194952 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 19 12:50:59 crc kubenswrapper[4833]: I0219 12:50:59.402161 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 19 12:50:59 crc kubenswrapper[4833]: I0219 12:50:59.405176 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 19 12:50:59 crc kubenswrapper[4833]: I0219 12:50:59.416732 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 19 12:50:59 crc kubenswrapper[4833]: I0219 12:50:59.530113 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 19 12:50:59 crc kubenswrapper[4833]: I0219 12:50:59.683969 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 19 12:50:59 crc kubenswrapper[4833]: I0219 12:50:59.711315 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 19 12:50:59 crc kubenswrapper[4833]: I0219 12:50:59.808149 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 19 12:50:59 crc kubenswrapper[4833]: I0219 12:50:59.846319 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 19 12:50:59 crc kubenswrapper[4833]: I0219 12:50:59.932637 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 19 12:51:00 crc kubenswrapper[4833]: I0219 12:51:00.008470 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 19 12:51:00 crc kubenswrapper[4833]: I0219 12:51:00.075466 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 19 12:51:00 crc kubenswrapper[4833]: I0219 12:51:00.145829 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 19 12:51:00 crc kubenswrapper[4833]: I0219 12:51:00.194707 4833 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 19 12:51:00 crc kubenswrapper[4833]: I0219 12:51:00.255735 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 19 12:51:00 crc kubenswrapper[4833]: I0219 12:51:00.282816 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 19 12:51:00 crc kubenswrapper[4833]: I0219 12:51:00.298308 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 19 12:51:00 crc kubenswrapper[4833]: I0219 12:51:00.325199 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 19 12:51:00 crc kubenswrapper[4833]: I0219 12:51:00.381629 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 19 12:51:00 crc kubenswrapper[4833]: I0219 12:51:00.698451 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 19 12:51:00 crc kubenswrapper[4833]: I0219 12:51:00.791961 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 19 12:51:00 crc kubenswrapper[4833]: I0219 12:51:00.805608 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 19 12:51:00 crc kubenswrapper[4833]: I0219 12:51:00.811607 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 19 12:51:00 crc kubenswrapper[4833]: I0219 12:51:00.885586 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 19 12:51:00 crc kubenswrapper[4833]: I0219 12:51:00.931074 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 19 12:51:00 crc kubenswrapper[4833]: I0219 12:51:00.988481 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 19 12:51:01 crc kubenswrapper[4833]: I0219 12:51:01.008337 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 19 12:51:01 crc kubenswrapper[4833]: I0219 12:51:01.027121 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 19 12:51:01 crc kubenswrapper[4833]: I0219 12:51:01.109121 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 19 12:51:01 crc kubenswrapper[4833]: I0219 12:51:01.121803 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 19 12:51:01 crc kubenswrapper[4833]: I0219 12:51:01.285596 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 19 12:51:01 crc kubenswrapper[4833]: I0219 12:51:01.369736 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 19 12:51:01 crc kubenswrapper[4833]: I0219 12:51:01.430852 4833 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 19 12:51:01 crc kubenswrapper[4833]: I0219 12:51:01.517743 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 19 12:51:01 crc kubenswrapper[4833]: I0219 12:51:01.558543 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 19 12:51:01 crc kubenswrapper[4833]: I0219 12:51:01.575303 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 19 12:51:01 crc kubenswrapper[4833]: I0219 12:51:01.575627 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 19 12:51:01 crc kubenswrapper[4833]: I0219 12:51:01.581860 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 19 12:51:01 crc kubenswrapper[4833]: I0219 12:51:01.652353 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 19 12:51:01 crc kubenswrapper[4833]: I0219 12:51:01.671490 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 19 12:51:01 crc kubenswrapper[4833]: I0219 12:51:01.710576 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 19 12:51:01 crc kubenswrapper[4833]: I0219 12:51:01.710848 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 19 12:51:01 crc kubenswrapper[4833]: I0219 12:51:01.776108 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 19 12:51:01 crc kubenswrapper[4833]: I0219 12:51:01.797242 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 19 12:51:01 crc kubenswrapper[4833]: I0219 12:51:01.845347 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 19 12:51:01 crc kubenswrapper[4833]: I0219 12:51:01.848128 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 19 12:51:01 crc kubenswrapper[4833]: I0219 12:51:01.920294 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 19 12:51:01 crc kubenswrapper[4833]: I0219 12:51:01.964690 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 19 12:51:02 crc kubenswrapper[4833]: I0219 12:51:02.084624 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 19 12:51:02 crc kubenswrapper[4833]: I0219 12:51:02.126467 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 19 12:51:02 crc kubenswrapper[4833]: I0219 12:51:02.251094 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 19 12:51:02 crc kubenswrapper[4833]: I0219 12:51:02.283084 4833 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 19 12:51:02 crc kubenswrapper[4833]: I0219 12:51:02.311725 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 19 12:51:02 crc kubenswrapper[4833]: I0219 12:51:02.334023 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 19 12:51:02 crc kubenswrapper[4833]: I0219 12:51:02.375610 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 19 12:51:02 crc kubenswrapper[4833]: I0219 12:51:02.382091 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 19 12:51:02 crc kubenswrapper[4833]: I0219 12:51:02.382866 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 19 12:51:02 crc kubenswrapper[4833]: I0219 12:51:02.388944 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 19 12:51:02 crc kubenswrapper[4833]: I0219 12:51:02.438060 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 19 12:51:02 crc kubenswrapper[4833]: I0219 12:51:02.567026 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 19 12:51:02 crc kubenswrapper[4833]: I0219 12:51:02.620060 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 19 12:51:02 crc kubenswrapper[4833]: I0219 12:51:02.640242 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 19 12:51:02 crc kubenswrapper[4833]: I0219 12:51:02.653444 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 19 12:51:02 crc kubenswrapper[4833]: I0219 12:51:02.787133 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 19 12:51:02 crc kubenswrapper[4833]: I0219 12:51:02.794558 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 19 12:51:02 crc kubenswrapper[4833]: I0219 12:51:02.802795 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 19 12:51:02 crc kubenswrapper[4833]: I0219 12:51:02.855718 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 19 12:51:02 crc kubenswrapper[4833]: I0219 12:51:02.952839 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 19 12:51:02 crc kubenswrapper[4833]: I0219 12:51:02.962064 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 19 12:51:02 crc kubenswrapper[4833]: I0219 12:51:02.966373 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 19 12:51:02 crc kubenswrapper[4833]: I0219 12:51:02.980239 4833 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 19 12:51:03 crc kubenswrapper[4833]: I0219 12:51:03.034738 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 19 12:51:03 crc kubenswrapper[4833]: I0219 12:51:03.081238 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 19 12:51:03 crc kubenswrapper[4833]: I0219 12:51:03.263059 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 19 12:51:03 crc kubenswrapper[4833]: I0219 12:51:03.322233 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 19 12:51:03 crc kubenswrapper[4833]: I0219 12:51:03.327937 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 19 12:51:03 crc kubenswrapper[4833]: I0219 12:51:03.328558 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 19 12:51:03 crc kubenswrapper[4833]: I0219 12:51:03.390677 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 19 12:51:03 crc kubenswrapper[4833]: I0219 12:51:03.427057 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 19 12:51:03 crc kubenswrapper[4833]: I0219 12:51:03.494611 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 19 12:51:03 crc kubenswrapper[4833]: I0219 12:51:03.555623 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 19 12:51:03 crc kubenswrapper[4833]: I0219 12:51:03.582661 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 19 12:51:03 crc kubenswrapper[4833]: I0219 12:51:03.611548 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 19 12:51:03 crc kubenswrapper[4833]: I0219 12:51:03.618646 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 19 12:51:03 crc kubenswrapper[4833]: I0219 12:51:03.646581 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 19 12:51:03 crc kubenswrapper[4833]: I0219 12:51:03.784237 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 19 12:51:03 crc kubenswrapper[4833]: I0219 12:51:03.886348 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 19 12:51:03 crc kubenswrapper[4833]: I0219 12:51:03.999612 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 19 12:51:04 crc kubenswrapper[4833]: I0219 12:51:04.070573 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 19 12:51:04 crc kubenswrapper[4833]: I0219 12:51:04.110526 4833 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 19 12:51:04 crc kubenswrapper[4833]: I0219 12:51:04.117985 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 19 12:51:04 crc kubenswrapper[4833]: I0219 12:51:04.203016 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 19 12:51:04 crc kubenswrapper[4833]: I0219 12:51:04.232274 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 19 12:51:04 crc kubenswrapper[4833]: I0219 12:51:04.248114 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 19 12:51:04 crc kubenswrapper[4833]: I0219 12:51:04.255760 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 19 12:51:04 crc kubenswrapper[4833]: I0219 12:51:04.257647 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 19 12:51:04 crc kubenswrapper[4833]: I0219 12:51:04.272215 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 19 12:51:04 crc kubenswrapper[4833]: I0219 12:51:04.296474 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 19 12:51:04 crc kubenswrapper[4833]: I0219 12:51:04.310443 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 19 12:51:04 crc kubenswrapper[4833]: I0219 12:51:04.324937 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 19 12:51:04 crc kubenswrapper[4833]: I0219 12:51:04.325771 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 19 12:51:04 crc kubenswrapper[4833]: I0219 12:51:04.362755 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 19 12:51:04 crc kubenswrapper[4833]: I0219 12:51:04.434253 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 19 12:51:04 crc kubenswrapper[4833]: I0219 12:51:04.455595 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 19 12:51:04 crc kubenswrapper[4833]: I0219 12:51:04.520870 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 19 12:51:04 crc kubenswrapper[4833]: I0219 12:51:04.545358 4833 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 19 12:51:04 crc kubenswrapper[4833]: I0219 12:51:04.689996 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 19 12:51:04 crc kubenswrapper[4833]: I0219 12:51:04.836311 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 19 12:51:04 crc kubenswrapper[4833]: I0219 
12:51:04.852066 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 19 12:51:04 crc kubenswrapper[4833]: I0219 12:51:04.882425 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 19 12:51:04 crc kubenswrapper[4833]: I0219 12:51:04.925406 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 19 12:51:04 crc kubenswrapper[4833]: I0219 12:51:04.927444 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 19 12:51:04 crc kubenswrapper[4833]: I0219 12:51:04.953489 4833 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 12:51:04 crc kubenswrapper[4833]: I0219 12:51:04.953878 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://56b63c8a7c41465187f164c49b42150a1143f67832557e299e0d250dd14e7f3b" gracePeriod=5 Feb 19 12:51:04 crc kubenswrapper[4833]: I0219 12:51:04.970881 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 19 12:51:04 crc kubenswrapper[4833]: I0219 12:51:04.973374 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 19 12:51:05 crc kubenswrapper[4833]: I0219 12:51:05.021640 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 19 12:51:05 crc kubenswrapper[4833]: I0219 12:51:05.059688 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 19 12:51:05 crc kubenswrapper[4833]: I0219 12:51:05.098529 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 19 12:51:05 crc kubenswrapper[4833]: I0219 12:51:05.150295 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 19 12:51:05 crc kubenswrapper[4833]: I0219 12:51:05.167736 4833 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 19 12:51:05 crc kubenswrapper[4833]: I0219 12:51:05.183840 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 19 12:51:05 crc kubenswrapper[4833]: I0219 12:51:05.221019 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 19 12:51:05 crc kubenswrapper[4833]: I0219 12:51:05.247954 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 19 12:51:05 crc kubenswrapper[4833]: I0219 12:51:05.249708 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 19 12:51:05 crc kubenswrapper[4833]: I0219 12:51:05.268656 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 19 12:51:05 crc 
kubenswrapper[4833]: I0219 12:51:05.319509 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 19 12:51:05 crc kubenswrapper[4833]: I0219 12:51:05.329844 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 19 12:51:05 crc kubenswrapper[4833]: I0219 12:51:05.375973 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 19 12:51:05 crc kubenswrapper[4833]: I0219 12:51:05.378478 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 19 12:51:05 crc kubenswrapper[4833]: I0219 12:51:05.421457 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 19 12:51:05 crc kubenswrapper[4833]: I0219 12:51:05.431951 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 19 12:51:05 crc kubenswrapper[4833]: I0219 12:51:05.483726 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 19 12:51:05 crc kubenswrapper[4833]: I0219 12:51:05.612847 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 19 12:51:05 crc kubenswrapper[4833]: I0219 12:51:05.659812 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 19 12:51:05 crc kubenswrapper[4833]: I0219 12:51:05.694000 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 19 12:51:05 crc kubenswrapper[4833]: I0219 12:51:05.701247 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 19 12:51:05 crc kubenswrapper[4833]: I0219 12:51:05.778564 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 19 12:51:05 crc kubenswrapper[4833]: I0219 12:51:05.806837 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 19 12:51:05 crc kubenswrapper[4833]: I0219 12:51:05.808952 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 19 12:51:05 crc kubenswrapper[4833]: I0219 12:51:05.821109 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 19 12:51:05 crc kubenswrapper[4833]: I0219 12:51:05.911679 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 19 12:51:06 crc kubenswrapper[4833]: I0219 12:51:06.025707 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 19 12:51:06 crc kubenswrapper[4833]: I0219 12:51:06.033324 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 19 12:51:06 crc kubenswrapper[4833]: I0219 12:51:06.062108 4833 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 19 12:51:06 crc kubenswrapper[4833]: I0219 12:51:06.091801 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 19 12:51:06 crc kubenswrapper[4833]: I0219 12:51:06.121929 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 19 12:51:06 crc kubenswrapper[4833]: I0219 12:51:06.153324 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 19 12:51:06 crc kubenswrapper[4833]: I0219 12:51:06.324747 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 19 12:51:06 crc kubenswrapper[4833]: I0219 12:51:06.379621 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 19 12:51:06 crc kubenswrapper[4833]: I0219 12:51:06.385333 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 19 12:51:06 crc kubenswrapper[4833]: I0219 12:51:06.400226 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 19 12:51:06 crc kubenswrapper[4833]: I0219 12:51:06.416397 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 19 12:51:06 crc kubenswrapper[4833]: I0219 12:51:06.741251 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 19 12:51:06 crc kubenswrapper[4833]: I0219 12:51:06.819772 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 19 12:51:07 crc kubenswrapper[4833]: I0219 12:51:07.201689 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 19 12:51:07 crc kubenswrapper[4833]: I0219 12:51:07.539655 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 19 12:51:07 crc kubenswrapper[4833]: I0219 12:51:07.818180 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 19 12:51:07 crc kubenswrapper[4833]: I0219 12:51:07.856169 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 19 12:51:07 crc kubenswrapper[4833]: I0219 12:51:07.914293 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 19 12:51:07 crc kubenswrapper[4833]: I0219 12:51:07.960385 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 19 12:51:08 crc kubenswrapper[4833]: I0219 12:51:08.064182 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 19 12:51:08 crc kubenswrapper[4833]: I0219 12:51:08.399190 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 19 12:51:08 crc kubenswrapper[4833]: I0219 12:51:08.708721 
4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 19 12:51:08 crc kubenswrapper[4833]: I0219 12:51:08.794340 4833 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 19 12:51:08 crc kubenswrapper[4833]: I0219 12:51:08.842296 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 19 12:51:08 crc kubenswrapper[4833]: I0219 12:51:08.891989 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 19 12:51:08 crc kubenswrapper[4833]: I0219 12:51:08.956881 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 19 12:51:08 crc kubenswrapper[4833]: I0219 12:51:08.961653 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 19 12:51:09 crc kubenswrapper[4833]: I0219 12:51:09.080642 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 19 12:51:09 crc kubenswrapper[4833]: I0219 12:51:09.395458 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 19 12:51:10 crc kubenswrapper[4833]: I0219 12:51:10.132363 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 19 12:51:10 crc kubenswrapper[4833]: I0219 12:51:10.132859 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 12:51:10 crc kubenswrapper[4833]: I0219 12:51:10.241694 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 12:51:10 crc kubenswrapper[4833]: I0219 12:51:10.241808 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 12:51:10 crc kubenswrapper[4833]: I0219 12:51:10.241852 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 12:51:10 crc kubenswrapper[4833]: I0219 12:51:10.241892 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 12:51:10 crc kubenswrapper[4833]: I0219 12:51:10.241910 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 12:51:10 crc kubenswrapper[4833]: I0219 12:51:10.241989 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 12:51:10 crc kubenswrapper[4833]: I0219 12:51:10.242040 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 12:51:10 crc kubenswrapper[4833]: I0219 12:51:10.242053 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 19 12:51:10 crc kubenswrapper[4833]: I0219 12:51:10.242236 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 12:51:10 crc kubenswrapper[4833]: I0219 12:51:10.242614 4833 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 12:51:10 crc kubenswrapper[4833]: I0219 12:51:10.242644 4833 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 19 12:51:10 crc kubenswrapper[4833]: I0219 12:51:10.242662 4833 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 19 12:51:10 crc kubenswrapper[4833]: I0219 12:51:10.242681 4833 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 19 12:51:10 crc kubenswrapper[4833]: I0219 12:51:10.255008 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 12:51:10 crc kubenswrapper[4833]: I0219 12:51:10.325993 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 12:51:10 crc kubenswrapper[4833]: I0219 12:51:10.326851 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 19 12:51:10 crc kubenswrapper[4833]: I0219 12:51:10.328769 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 19 12:51:10 crc kubenswrapper[4833]: I0219 12:51:10.344583 4833 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 12:51:10 crc kubenswrapper[4833]: I0219 12:51:10.442933 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 19 12:51:10 crc kubenswrapper[4833]: I0219 12:51:10.486475 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 19 12:51:10 crc kubenswrapper[4833]: I0219 12:51:10.486603 4833 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="56b63c8a7c41465187f164c49b42150a1143f67832557e299e0d250dd14e7f3b" exitCode=137 Feb 19 12:51:10 crc kubenswrapper[4833]: I0219 12:51:10.486674 4833 scope.go:117] "RemoveContainer" containerID="56b63c8a7c41465187f164c49b42150a1143f67832557e299e0d250dd14e7f3b" Feb 19 12:51:10 crc kubenswrapper[4833]: I0219 12:51:10.486855 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 12:51:10 crc kubenswrapper[4833]: I0219 12:51:10.500794 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 19 12:51:10 crc kubenswrapper[4833]: I0219 12:51:10.515588 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 19 12:51:10 crc kubenswrapper[4833]: I0219 12:51:10.516553 4833 scope.go:117] "RemoveContainer" containerID="56b63c8a7c41465187f164c49b42150a1143f67832557e299e0d250dd14e7f3b" Feb 19 12:51:10 crc kubenswrapper[4833]: E0219 12:51:10.517339 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56b63c8a7c41465187f164c49b42150a1143f67832557e299e0d250dd14e7f3b\": container with ID starting with 56b63c8a7c41465187f164c49b42150a1143f67832557e299e0d250dd14e7f3b not found: ID does not exist" containerID="56b63c8a7c41465187f164c49b42150a1143f67832557e299e0d250dd14e7f3b" Feb 19 12:51:10 crc kubenswrapper[4833]: I0219 12:51:10.517425 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56b63c8a7c41465187f164c49b42150a1143f67832557e299e0d250dd14e7f3b"} err="failed to get container status \"56b63c8a7c41465187f164c49b42150a1143f67832557e299e0d250dd14e7f3b\": rpc error: code = NotFound desc = could not find container \"56b63c8a7c41465187f164c49b42150a1143f67832557e299e0d250dd14e7f3b\": container with ID starting with 56b63c8a7c41465187f164c49b42150a1143f67832557e299e0d250dd14e7f3b not found: ID does not exist" Feb 19 12:51:10 crc kubenswrapper[4833]: I0219 12:51:10.918065 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 19 12:51:11 crc kubenswrapper[4833]: I0219 12:51:11.676809 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 19 12:51:27 crc kubenswrapper[4833]: I0219 12:51:27.608759 4833 generic.go:334] "Generic (PLEG): container finished" podID="7d7f1229-1f55-416b-beeb-60a3ae0abc62" containerID="10bbac755570dda634d27a65383cd694922862579fc50dc9a9c28b42321c7899" exitCode=0 Feb 19 12:51:27 crc kubenswrapper[4833]: I0219 12:51:27.608909 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zw6vx" event={"ID":"7d7f1229-1f55-416b-beeb-60a3ae0abc62","Type":"ContainerDied","Data":"10bbac755570dda634d27a65383cd694922862579fc50dc9a9c28b42321c7899"} Feb 19 12:51:27 crc kubenswrapper[4833]: I0219 12:51:27.610311 4833 scope.go:117] "RemoveContainer" containerID="10bbac755570dda634d27a65383cd694922862579fc50dc9a9c28b42321c7899" Feb 19 12:51:28 crc kubenswrapper[4833]: I0219 12:51:28.618867 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zw6vx" event={"ID":"7d7f1229-1f55-416b-beeb-60a3ae0abc62","Type":"ContainerStarted","Data":"080ef65a7973e78ff1e3f33a31141924d7ab8b832ba5d87a8bbff76905cad55d"} Feb 19 12:51:28 crc kubenswrapper[4833]: I0219 12:51:28.619675 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zw6vx" Feb 19 12:51:28 crc kubenswrapper[4833]: I0219 12:51:28.621614 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/marketplace-operator-79b997595-zw6vx" Feb 19 12:51:30 crc kubenswrapper[4833]: I0219 12:51:30.092651 4833 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 19 12:52:15 crc kubenswrapper[4833]: I0219 12:52:15.745835 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 12:52:15 crc kubenswrapper[4833]: I0219 12:52:15.746801 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 12:52:41 crc kubenswrapper[4833]: I0219 12:52:41.050322 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-7gfdx"] Feb 19 12:52:41 crc kubenswrapper[4833]: E0219 12:52:41.051321 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 19 12:52:41 crc kubenswrapper[4833]: I0219 12:52:41.051341 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 19 12:52:41 crc kubenswrapper[4833]: E0219 12:52:41.051362 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2582174b-9c9d-465e-9f88-e249c815e8a0" containerName="installer" Feb 19 12:52:41 crc kubenswrapper[4833]: I0219 12:52:41.051372 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2582174b-9c9d-465e-9f88-e249c815e8a0" containerName="installer" Feb 19 12:52:41 crc kubenswrapper[4833]: I0219 12:52:41.051558 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="2582174b-9c9d-465e-9f88-e249c815e8a0" containerName="installer" Feb 19 12:52:41 crc kubenswrapper[4833]: I0219 12:52:41.051580 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 19 12:52:41 crc kubenswrapper[4833]: I0219 12:52:41.052143 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-7gfdx" Feb 19 12:52:41 crc kubenswrapper[4833]: I0219 12:52:41.078546 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-7gfdx"] Feb 19 12:52:41 crc kubenswrapper[4833]: I0219 12:52:41.204420 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3ab81d4b-2145-45eb-a84e-6948a09c4c2f-registry-tls\") pod \"image-registry-66df7c8f76-7gfdx\" (UID: \"3ab81d4b-2145-45eb-a84e-6948a09c4c2f\") " pod="openshift-image-registry/image-registry-66df7c8f76-7gfdx" Feb 19 12:52:41 crc kubenswrapper[4833]: I0219 12:52:41.204470 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ab81d4b-2145-45eb-a84e-6948a09c4c2f-trusted-ca\") pod \"image-registry-66df7c8f76-7gfdx\" (UID: \"3ab81d4b-2145-45eb-a84e-6948a09c4c2f\") " pod="openshift-image-registry/image-registry-66df7c8f76-7gfdx" Feb 19 12:52:41 crc kubenswrapper[4833]: I0219 12:52:41.204519 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-7gfdx\" (UID: \"3ab81d4b-2145-45eb-a84e-6948a09c4c2f\") " pod="openshift-image-registry/image-registry-66df7c8f76-7gfdx" Feb 19 12:52:41 crc kubenswrapper[4833]: I0219 12:52:41.204552 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3ab81d4b-2145-45eb-a84e-6948a09c4c2f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7gfdx\" (UID: \"3ab81d4b-2145-45eb-a84e-6948a09c4c2f\") " pod="openshift-image-registry/image-registry-66df7c8f76-7gfdx" Feb 19 12:52:41 crc kubenswrapper[4833]: I0219 12:52:41.204580 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3ab81d4b-2145-45eb-a84e-6948a09c4c2f-registry-certificates\") pod \"image-registry-66df7c8f76-7gfdx\" (UID: \"3ab81d4b-2145-45eb-a84e-6948a09c4c2f\") " pod="openshift-image-registry/image-registry-66df7c8f76-7gfdx" Feb 19 12:52:41 crc kubenswrapper[4833]: I0219 12:52:41.204620 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zx57\" (UniqueName: \"kubernetes.io/projected/3ab81d4b-2145-45eb-a84e-6948a09c4c2f-kube-api-access-9zx57\") pod \"image-registry-66df7c8f76-7gfdx\" (UID: \"3ab81d4b-2145-45eb-a84e-6948a09c4c2f\") " pod="openshift-image-registry/image-registry-66df7c8f76-7gfdx" Feb 19 12:52:41 crc kubenswrapper[4833]: I0219 12:52:41.204664 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3ab81d4b-2145-45eb-a84e-6948a09c4c2f-bound-sa-token\") pod \"image-registry-66df7c8f76-7gfdx\" (UID: \"3ab81d4b-2145-45eb-a84e-6948a09c4c2f\") " pod="openshift-image-registry/image-registry-66df7c8f76-7gfdx" Feb 19 12:52:41 crc kubenswrapper[4833]: I0219 12:52:41.204686 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/3ab81d4b-2145-45eb-a84e-6948a09c4c2f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7gfdx\" (UID: \"3ab81d4b-2145-45eb-a84e-6948a09c4c2f\") " pod="openshift-image-registry/image-registry-66df7c8f76-7gfdx" Feb 19 12:52:41 crc kubenswrapper[4833]: I0219 12:52:41.231106 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-7gfdx\" (UID: \"3ab81d4b-2145-45eb-a84e-6948a09c4c2f\") " pod="openshift-image-registry/image-registry-66df7c8f76-7gfdx" Feb 19 12:52:41 crc kubenswrapper[4833]: I0219 12:52:41.306726 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zx57\" (UniqueName: \"kubernetes.io/projected/3ab81d4b-2145-45eb-a84e-6948a09c4c2f-kube-api-access-9zx57\") pod \"image-registry-66df7c8f76-7gfdx\" (UID: \"3ab81d4b-2145-45eb-a84e-6948a09c4c2f\") " pod="openshift-image-registry/image-registry-66df7c8f76-7gfdx" Feb 19 12:52:41 crc kubenswrapper[4833]: I0219 12:52:41.306881 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3ab81d4b-2145-45eb-a84e-6948a09c4c2f-bound-sa-token\") pod \"image-registry-66df7c8f76-7gfdx\" (UID: \"3ab81d4b-2145-45eb-a84e-6948a09c4c2f\") " pod="openshift-image-registry/image-registry-66df7c8f76-7gfdx" Feb 19 12:52:41 crc kubenswrapper[4833]: I0219 12:52:41.306939 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3ab81d4b-2145-45eb-a84e-6948a09c4c2f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7gfdx\" (UID: \"3ab81d4b-2145-45eb-a84e-6948a09c4c2f\") " pod="openshift-image-registry/image-registry-66df7c8f76-7gfdx" Feb 19 12:52:41 crc kubenswrapper[4833]: I0219 12:52:41.307052 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3ab81d4b-2145-45eb-a84e-6948a09c4c2f-registry-tls\") pod \"image-registry-66df7c8f76-7gfdx\" (UID: \"3ab81d4b-2145-45eb-a84e-6948a09c4c2f\") " pod="openshift-image-registry/image-registry-66df7c8f76-7gfdx" Feb 19 12:52:41 crc kubenswrapper[4833]: I0219 12:52:41.307098 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ab81d4b-2145-45eb-a84e-6948a09c4c2f-trusted-ca\") pod \"image-registry-66df7c8f76-7gfdx\" (UID: \"3ab81d4b-2145-45eb-a84e-6948a09c4c2f\") " pod="openshift-image-registry/image-registry-66df7c8f76-7gfdx" Feb 19 12:52:41 crc kubenswrapper[4833]: I0219 12:52:41.307164 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3ab81d4b-2145-45eb-a84e-6948a09c4c2f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7gfdx\" (UID: \"3ab81d4b-2145-45eb-a84e-6948a09c4c2f\") " pod="openshift-image-registry/image-registry-66df7c8f76-7gfdx" Feb 19 12:52:41 crc kubenswrapper[4833]: I0219 12:52:41.307213 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3ab81d4b-2145-45eb-a84e-6948a09c4c2f-registry-certificates\") pod \"image-registry-66df7c8f76-7gfdx\" (UID: \"3ab81d4b-2145-45eb-a84e-6948a09c4c2f\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-7gfdx" Feb 19 12:52:41 crc kubenswrapper[4833]: I0219 12:52:41.309474 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3ab81d4b-2145-45eb-a84e-6948a09c4c2f-trusted-ca\") pod \"image-registry-66df7c8f76-7gfdx\" (UID: \"3ab81d4b-2145-45eb-a84e-6948a09c4c2f\") " pod="openshift-image-registry/image-registry-66df7c8f76-7gfdx" Feb 19 12:52:41 crc kubenswrapper[4833]: I0219 12:52:41.310042 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3ab81d4b-2145-45eb-a84e-6948a09c4c2f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7gfdx\" (UID: \"3ab81d4b-2145-45eb-a84e-6948a09c4c2f\") " pod="openshift-image-registry/image-registry-66df7c8f76-7gfdx" Feb 19 12:52:41 crc kubenswrapper[4833]: I0219 12:52:41.310110 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3ab81d4b-2145-45eb-a84e-6948a09c4c2f-registry-certificates\") pod \"image-registry-66df7c8f76-7gfdx\" (UID: \"3ab81d4b-2145-45eb-a84e-6948a09c4c2f\") " pod="openshift-image-registry/image-registry-66df7c8f76-7gfdx" Feb 19 12:52:41 crc kubenswrapper[4833]: I0219 12:52:41.312716 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3ab81d4b-2145-45eb-a84e-6948a09c4c2f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7gfdx\" (UID: \"3ab81d4b-2145-45eb-a84e-6948a09c4c2f\") " pod="openshift-image-registry/image-registry-66df7c8f76-7gfdx" Feb 19 12:52:41 crc kubenswrapper[4833]: I0219 12:52:41.312796 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3ab81d4b-2145-45eb-a84e-6948a09c4c2f-registry-tls\") pod \"image-registry-66df7c8f76-7gfdx\" (UID: \"3ab81d4b-2145-45eb-a84e-6948a09c4c2f\") " pod="openshift-image-registry/image-registry-66df7c8f76-7gfdx" Feb 19 12:52:41 crc kubenswrapper[4833]: I0219 12:52:41.327992 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zx57\" (UniqueName: \"kubernetes.io/projected/3ab81d4b-2145-45eb-a84e-6948a09c4c2f-kube-api-access-9zx57\") pod \"image-registry-66df7c8f76-7gfdx\" (UID: \"3ab81d4b-2145-45eb-a84e-6948a09c4c2f\") " pod="openshift-image-registry/image-registry-66df7c8f76-7gfdx" Feb 19 12:52:41 crc kubenswrapper[4833]: I0219 12:52:41.332819 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3ab81d4b-2145-45eb-a84e-6948a09c4c2f-bound-sa-token\") pod \"image-registry-66df7c8f76-7gfdx\" (UID: \"3ab81d4b-2145-45eb-a84e-6948a09c4c2f\") " pod="openshift-image-registry/image-registry-66df7c8f76-7gfdx" Feb 19 12:52:41 crc kubenswrapper[4833]: I0219 12:52:41.409711 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-7gfdx" Feb 19 12:52:41 crc kubenswrapper[4833]: I0219 12:52:41.652309 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-7gfdx"] Feb 19 12:52:42 crc kubenswrapper[4833]: I0219 12:52:42.116080 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-7gfdx" event={"ID":"3ab81d4b-2145-45eb-a84e-6948a09c4c2f","Type":"ContainerStarted","Data":"9e709882c7500bf6b75f8147c048f2871a83c3a4377617d08e1948e23631f152"} Feb 19 12:52:42 crc kubenswrapper[4833]: I0219 12:52:42.117069 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-7gfdx" event={"ID":"3ab81d4b-2145-45eb-a84e-6948a09c4c2f","Type":"ContainerStarted","Data":"41c4251807451bed750aa5933be26ba7ec576a2bddf9002fdadd782ed749fb1a"} Feb 19 12:52:42 crc kubenswrapper[4833]: I0219 12:52:42.117114 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-7gfdx" Feb 19 12:52:42 crc kubenswrapper[4833]: I0219 12:52:42.146707 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-7gfdx" podStartSLOduration=1.146688807 podStartE2EDuration="1.146688807s" podCreationTimestamp="2026-02-19 12:52:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:52:42.143302004 +0000 UTC m=+372.538820792" watchObservedRunningTime="2026-02-19 12:52:42.146688807 +0000 UTC m=+372.542207585" Feb 19 12:52:45 crc kubenswrapper[4833]: I0219 12:52:45.745073 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 12:52:45 crc kubenswrapper[4833]: I0219 12:52:45.745450 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 12:52:52 crc kubenswrapper[4833]: I0219 12:52:52.863873 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zfz65"] Feb 19 12:52:52 crc kubenswrapper[4833]: I0219 12:52:52.864827 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zfz65" podUID="248c3a65-f82e-475e-9d61-502028f6c2cc" containerName="registry-server" containerID="cri-o://ceeb2b513f06542a1952246ae8af59eadc3c6aa360913d8ac2426b1955819818" gracePeriod=30 Feb 19 12:52:52 crc kubenswrapper[4833]: I0219 12:52:52.890695 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jkz78"] Feb 19 12:52:52 crc kubenswrapper[4833]: I0219 12:52:52.891104 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jkz78" podUID="3a9e548c-9edf-4dc6-83a7-4f07f6960721" containerName="registry-server" containerID="cri-o://5416347737fb82cb710e02ab246acafbe88170e54331bb37f4de5ad2aaa68356" gracePeriod=30 
Feb 19 12:52:52 crc kubenswrapper[4833]: I0219 12:52:52.908818 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zw6vx"] Feb 19 12:52:52 crc kubenswrapper[4833]: I0219 12:52:52.909082 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-zw6vx" podUID="7d7f1229-1f55-416b-beeb-60a3ae0abc62" containerName="marketplace-operator" containerID="cri-o://080ef65a7973e78ff1e3f33a31141924d7ab8b832ba5d87a8bbff76905cad55d" gracePeriod=30 Feb 19 12:52:52 crc kubenswrapper[4833]: I0219 12:52:52.914244 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pjrfv"] Feb 19 12:52:52 crc kubenswrapper[4833]: I0219 12:52:52.914486 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pjrfv" podUID="01feede9-207a-499b-aee1-0fcde52463d6" containerName="registry-server" containerID="cri-o://11c67ecb15a56d729e586fc2a27c91452fc2511d378db23750f24c7ab8af476e" gracePeriod=30 Feb 19 12:52:52 crc kubenswrapper[4833]: I0219 12:52:52.922619 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-c78pj"] Feb 19 12:52:52 crc kubenswrapper[4833]: I0219 12:52:52.923845 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-c78pj" Feb 19 12:52:52 crc kubenswrapper[4833]: I0219 12:52:52.927663 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z7wn9"] Feb 19 12:52:52 crc kubenswrapper[4833]: I0219 12:52:52.927861 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z7wn9" podUID="1011b353-4bd1-4087-b510-22d34e72e48b" containerName="registry-server" containerID="cri-o://7a97ba92fa433c65e4027908a37bce3478be5f99c029477d2e8ea64f942700a3" gracePeriod=30 Feb 19 12:52:52 crc kubenswrapper[4833]: I0219 12:52:52.931881 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-c78pj"] Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.079314 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/29007976-47bd-4251-8d1e-043d4c87270d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-c78pj\" (UID: \"29007976-47bd-4251-8d1e-043d4c87270d\") " pod="openshift-marketplace/marketplace-operator-79b997595-c78pj" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.079699 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29007976-47bd-4251-8d1e-043d4c87270d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-c78pj\" (UID: \"29007976-47bd-4251-8d1e-043d4c87270d\") " pod="openshift-marketplace/marketplace-operator-79b997595-c78pj" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.079736 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lw9g\" (UniqueName: \"kubernetes.io/projected/29007976-47bd-4251-8d1e-043d4c87270d-kube-api-access-6lw9g\") pod \"marketplace-operator-79b997595-c78pj\" (UID: \"29007976-47bd-4251-8d1e-043d4c87270d\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-c78pj" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.180013 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29007976-47bd-4251-8d1e-043d4c87270d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-c78pj\" (UID: \"29007976-47bd-4251-8d1e-043d4c87270d\") " pod="openshift-marketplace/marketplace-operator-79b997595-c78pj" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.180070 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lw9g\" (UniqueName: \"kubernetes.io/projected/29007976-47bd-4251-8d1e-043d4c87270d-kube-api-access-6lw9g\") pod \"marketplace-operator-79b997595-c78pj\" (UID: \"29007976-47bd-4251-8d1e-043d4c87270d\") " pod="openshift-marketplace/marketplace-operator-79b997595-c78pj" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.180102 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/29007976-47bd-4251-8d1e-043d4c87270d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-c78pj\" (UID: \"29007976-47bd-4251-8d1e-043d4c87270d\") " pod="openshift-marketplace/marketplace-operator-79b997595-c78pj" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.181592 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/29007976-47bd-4251-8d1e-043d4c87270d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-c78pj\" (UID: \"29007976-47bd-4251-8d1e-043d4c87270d\") " pod="openshift-marketplace/marketplace-operator-79b997595-c78pj" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.184990 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/29007976-47bd-4251-8d1e-043d4c87270d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-c78pj\" (UID: \"29007976-47bd-4251-8d1e-043d4c87270d\") " pod="openshift-marketplace/marketplace-operator-79b997595-c78pj" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.189887 4833 generic.go:334] "Generic (PLEG): container finished" podID="7d7f1229-1f55-416b-beeb-60a3ae0abc62" containerID="080ef65a7973e78ff1e3f33a31141924d7ab8b832ba5d87a8bbff76905cad55d" exitCode=0 Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.189943 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zw6vx" event={"ID":"7d7f1229-1f55-416b-beeb-60a3ae0abc62","Type":"ContainerDied","Data":"080ef65a7973e78ff1e3f33a31141924d7ab8b832ba5d87a8bbff76905cad55d"} Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.190110 4833 scope.go:117] "RemoveContainer" containerID="10bbac755570dda634d27a65383cd694922862579fc50dc9a9c28b42321c7899" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.193809 4833 generic.go:334] "Generic (PLEG): container finished" podID="1011b353-4bd1-4087-b510-22d34e72e48b" containerID="7a97ba92fa433c65e4027908a37bce3478be5f99c029477d2e8ea64f942700a3" exitCode=0 Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.193876 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z7wn9" 
event={"ID":"1011b353-4bd1-4087-b510-22d34e72e48b","Type":"ContainerDied","Data":"7a97ba92fa433c65e4027908a37bce3478be5f99c029477d2e8ea64f942700a3"} Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.196892 4833 generic.go:334] "Generic (PLEG): container finished" podID="3a9e548c-9edf-4dc6-83a7-4f07f6960721" containerID="5416347737fb82cb710e02ab246acafbe88170e54331bb37f4de5ad2aaa68356" exitCode=0 Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.196952 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jkz78" event={"ID":"3a9e548c-9edf-4dc6-83a7-4f07f6960721","Type":"ContainerDied","Data":"5416347737fb82cb710e02ab246acafbe88170e54331bb37f4de5ad2aaa68356"} Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.197782 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lw9g\" (UniqueName: \"kubernetes.io/projected/29007976-47bd-4251-8d1e-043d4c87270d-kube-api-access-6lw9g\") pod \"marketplace-operator-79b997595-c78pj\" (UID: \"29007976-47bd-4251-8d1e-043d4c87270d\") " pod="openshift-marketplace/marketplace-operator-79b997595-c78pj" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.200636 4833 generic.go:334] "Generic (PLEG): container finished" podID="248c3a65-f82e-475e-9d61-502028f6c2cc" containerID="ceeb2b513f06542a1952246ae8af59eadc3c6aa360913d8ac2426b1955819818" exitCode=0 Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.200692 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zfz65" event={"ID":"248c3a65-f82e-475e-9d61-502028f6c2cc","Type":"ContainerDied","Data":"ceeb2b513f06542a1952246ae8af59eadc3c6aa360913d8ac2426b1955819818"} Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.200717 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zfz65" event={"ID":"248c3a65-f82e-475e-9d61-502028f6c2cc","Type":"ContainerDied","Data":"94607577f097d2ab6fe1bfe83bef77dce5f6709d413b0fb41ff5bf479959b0e1"} Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.200728 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94607577f097d2ab6fe1bfe83bef77dce5f6709d413b0fb41ff5bf479959b0e1" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.206550 4833 generic.go:334] "Generic (PLEG): container finished" podID="01feede9-207a-499b-aee1-0fcde52463d6" containerID="11c67ecb15a56d729e586fc2a27c91452fc2511d378db23750f24c7ab8af476e" exitCode=0 Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.206585 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pjrfv" event={"ID":"01feede9-207a-499b-aee1-0fcde52463d6","Type":"ContainerDied","Data":"11c67ecb15a56d729e586fc2a27c91452fc2511d378db23750f24c7ab8af476e"} Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.233197 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-c78pj" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.237797 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zfz65" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.261088 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jkz78" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.315991 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zw6vx" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.334460 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z7wn9" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.385171 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9dss\" (UniqueName: \"kubernetes.io/projected/248c3a65-f82e-475e-9d61-502028f6c2cc-kube-api-access-t9dss\") pod \"248c3a65-f82e-475e-9d61-502028f6c2cc\" (UID: \"248c3a65-f82e-475e-9d61-502028f6c2cc\") " Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.386000 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pjrfv" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.386458 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75wj8\" (UniqueName: \"kubernetes.io/projected/01feede9-207a-499b-aee1-0fcde52463d6-kube-api-access-75wj8\") pod \"01feede9-207a-499b-aee1-0fcde52463d6\" (UID: \"01feede9-207a-499b-aee1-0fcde52463d6\") " Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.386722 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7d7f1229-1f55-416b-beeb-60a3ae0abc62-marketplace-operator-metrics\") pod \"7d7f1229-1f55-416b-beeb-60a3ae0abc62\" (UID: \"7d7f1229-1f55-416b-beeb-60a3ae0abc62\") " Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.386760 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a9e548c-9edf-4dc6-83a7-4f07f6960721-utilities\") pod \"3a9e548c-9edf-4dc6-83a7-4f07f6960721\" (UID: \"3a9e548c-9edf-4dc6-83a7-4f07f6960721\") " Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.386782 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01feede9-207a-499b-aee1-0fcde52463d6-utilities\") pod \"01feede9-207a-499b-aee1-0fcde52463d6\" (UID: \"01feede9-207a-499b-aee1-0fcde52463d6\") " Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.386837 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01feede9-207a-499b-aee1-0fcde52463d6-catalog-content\") pod \"01feede9-207a-499b-aee1-0fcde52463d6\" (UID: \"01feede9-207a-499b-aee1-0fcde52463d6\") " Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.386894 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d7f1229-1f55-416b-beeb-60a3ae0abc62-marketplace-trusted-ca\") pod \"7d7f1229-1f55-416b-beeb-60a3ae0abc62\" (UID: \"7d7f1229-1f55-416b-beeb-60a3ae0abc62\") " Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.386941 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58b7q\" (UniqueName: \"kubernetes.io/projected/1011b353-4bd1-4087-b510-22d34e72e48b-kube-api-access-58b7q\") pod \"1011b353-4bd1-4087-b510-22d34e72e48b\" (UID: 
\"1011b353-4bd1-4087-b510-22d34e72e48b\") " Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.386967 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1011b353-4bd1-4087-b510-22d34e72e48b-utilities\") pod \"1011b353-4bd1-4087-b510-22d34e72e48b\" (UID: \"1011b353-4bd1-4087-b510-22d34e72e48b\") " Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.386989 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcnrz\" (UniqueName: \"kubernetes.io/projected/7d7f1229-1f55-416b-beeb-60a3ae0abc62-kube-api-access-lcnrz\") pod \"7d7f1229-1f55-416b-beeb-60a3ae0abc62\" (UID: \"7d7f1229-1f55-416b-beeb-60a3ae0abc62\") " Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.387012 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/248c3a65-f82e-475e-9d61-502028f6c2cc-catalog-content\") pod \"248c3a65-f82e-475e-9d61-502028f6c2cc\" (UID: \"248c3a65-f82e-475e-9d61-502028f6c2cc\") " Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.387044 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a9e548c-9edf-4dc6-83a7-4f07f6960721-catalog-content\") pod \"3a9e548c-9edf-4dc6-83a7-4f07f6960721\" (UID: \"3a9e548c-9edf-4dc6-83a7-4f07f6960721\") " Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.387066 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1011b353-4bd1-4087-b510-22d34e72e48b-catalog-content\") pod \"1011b353-4bd1-4087-b510-22d34e72e48b\" (UID: \"1011b353-4bd1-4087-b510-22d34e72e48b\") " Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.387088 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ql2nm\" (UniqueName: \"kubernetes.io/projected/3a9e548c-9edf-4dc6-83a7-4f07f6960721-kube-api-access-ql2nm\") pod \"3a9e548c-9edf-4dc6-83a7-4f07f6960721\" (UID: \"3a9e548c-9edf-4dc6-83a7-4f07f6960721\") " Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.387110 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/248c3a65-f82e-475e-9d61-502028f6c2cc-utilities\") pod \"248c3a65-f82e-475e-9d61-502028f6c2cc\" (UID: \"248c3a65-f82e-475e-9d61-502028f6c2cc\") " Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.387580 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a9e548c-9edf-4dc6-83a7-4f07f6960721-utilities" (OuterVolumeSpecName: "utilities") pod "3a9e548c-9edf-4dc6-83a7-4f07f6960721" (UID: "3a9e548c-9edf-4dc6-83a7-4f07f6960721"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.388877 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/248c3a65-f82e-475e-9d61-502028f6c2cc-utilities" (OuterVolumeSpecName: "utilities") pod "248c3a65-f82e-475e-9d61-502028f6c2cc" (UID: "248c3a65-f82e-475e-9d61-502028f6c2cc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.395025 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d7f1229-1f55-416b-beeb-60a3ae0abc62-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "7d7f1229-1f55-416b-beeb-60a3ae0abc62" (UID: "7d7f1229-1f55-416b-beeb-60a3ae0abc62"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.396553 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1011b353-4bd1-4087-b510-22d34e72e48b-kube-api-access-58b7q" (OuterVolumeSpecName: "kube-api-access-58b7q") pod "1011b353-4bd1-4087-b510-22d34e72e48b" (UID: "1011b353-4bd1-4087-b510-22d34e72e48b"). InnerVolumeSpecName "kube-api-access-58b7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.409778 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/248c3a65-f82e-475e-9d61-502028f6c2cc-kube-api-access-t9dss" (OuterVolumeSpecName: "kube-api-access-t9dss") pod "248c3a65-f82e-475e-9d61-502028f6c2cc" (UID: "248c3a65-f82e-475e-9d61-502028f6c2cc"). InnerVolumeSpecName "kube-api-access-t9dss". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.412148 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1011b353-4bd1-4087-b510-22d34e72e48b-utilities" (OuterVolumeSpecName: "utilities") pod "1011b353-4bd1-4087-b510-22d34e72e48b" (UID: "1011b353-4bd1-4087-b510-22d34e72e48b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.412602 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d7f1229-1f55-416b-beeb-60a3ae0abc62-kube-api-access-lcnrz" (OuterVolumeSpecName: "kube-api-access-lcnrz") pod "7d7f1229-1f55-416b-beeb-60a3ae0abc62" (UID: "7d7f1229-1f55-416b-beeb-60a3ae0abc62"). InnerVolumeSpecName "kube-api-access-lcnrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.412978 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a9e548c-9edf-4dc6-83a7-4f07f6960721-kube-api-access-ql2nm" (OuterVolumeSpecName: "kube-api-access-ql2nm") pod "3a9e548c-9edf-4dc6-83a7-4f07f6960721" (UID: "3a9e548c-9edf-4dc6-83a7-4f07f6960721"). InnerVolumeSpecName "kube-api-access-ql2nm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.414581 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d7f1229-1f55-416b-beeb-60a3ae0abc62-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "7d7f1229-1f55-416b-beeb-60a3ae0abc62" (UID: "7d7f1229-1f55-416b-beeb-60a3ae0abc62"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.416413 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01feede9-207a-499b-aee1-0fcde52463d6-kube-api-access-75wj8" (OuterVolumeSpecName: "kube-api-access-75wj8") pod "01feede9-207a-499b-aee1-0fcde52463d6" (UID: "01feede9-207a-499b-aee1-0fcde52463d6"). InnerVolumeSpecName "kube-api-access-75wj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.419579 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01feede9-207a-499b-aee1-0fcde52463d6-utilities" (OuterVolumeSpecName: "utilities") pod "01feede9-207a-499b-aee1-0fcde52463d6" (UID: "01feede9-207a-499b-aee1-0fcde52463d6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.463431 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01feede9-207a-499b-aee1-0fcde52463d6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "01feede9-207a-499b-aee1-0fcde52463d6" (UID: "01feede9-207a-499b-aee1-0fcde52463d6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.488272 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/248c3a65-f82e-475e-9d61-502028f6c2cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "248c3a65-f82e-475e-9d61-502028f6c2cc" (UID: "248c3a65-f82e-475e-9d61-502028f6c2cc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.488899 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58b7q\" (UniqueName: \"kubernetes.io/projected/1011b353-4bd1-4087-b510-22d34e72e48b-kube-api-access-58b7q\") on node \"crc\" DevicePath \"\"" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.488922 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1011b353-4bd1-4087-b510-22d34e72e48b-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.488937 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcnrz\" (UniqueName: \"kubernetes.io/projected/7d7f1229-1f55-416b-beeb-60a3ae0abc62-kube-api-access-lcnrz\") on node \"crc\" DevicePath \"\"" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.488949 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/248c3a65-f82e-475e-9d61-502028f6c2cc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.488961 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ql2nm\" (UniqueName: \"kubernetes.io/projected/3a9e548c-9edf-4dc6-83a7-4f07f6960721-kube-api-access-ql2nm\") on node \"crc\" DevicePath \"\"" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.488972 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/248c3a65-f82e-475e-9d61-502028f6c2cc-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.488983 4833 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9dss\" (UniqueName: \"kubernetes.io/projected/248c3a65-f82e-475e-9d61-502028f6c2cc-kube-api-access-t9dss\") on node \"crc\" DevicePath \"\"" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.488994 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75wj8\" (UniqueName: \"kubernetes.io/projected/01feede9-207a-499b-aee1-0fcde52463d6-kube-api-access-75wj8\") on node \"crc\" DevicePath \"\"" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.489006 4833 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7d7f1229-1f55-416b-beeb-60a3ae0abc62-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.489019 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a9e548c-9edf-4dc6-83a7-4f07f6960721-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.489030 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01feede9-207a-499b-aee1-0fcde52463d6-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.489041 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01feede9-207a-499b-aee1-0fcde52463d6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.489052 4833 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7d7f1229-1f55-416b-beeb-60a3ae0abc62-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.491612 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a9e548c-9edf-4dc6-83a7-4f07f6960721-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a9e548c-9edf-4dc6-83a7-4f07f6960721" (UID: "3a9e548c-9edf-4dc6-83a7-4f07f6960721"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.559235 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1011b353-4bd1-4087-b510-22d34e72e48b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1011b353-4bd1-4087-b510-22d34e72e48b" (UID: "1011b353-4bd1-4087-b510-22d34e72e48b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.589948 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a9e548c-9edf-4dc6-83a7-4f07f6960721-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.589978 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1011b353-4bd1-4087-b510-22d34e72e48b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 12:52:53 crc kubenswrapper[4833]: I0219 12:52:53.761804 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-c78pj"] Feb 19 12:52:54 crc kubenswrapper[4833]: I0219 12:52:54.216192 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pjrfv" event={"ID":"01feede9-207a-499b-aee1-0fcde52463d6","Type":"ContainerDied","Data":"0f2daf397ded588088d0799135730cd745cb60af0159f84dabb2e0714734248e"} Feb 19 12:52:54 crc kubenswrapper[4833]: I0219 12:52:54.216295 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pjrfv" Feb 19 12:52:54 crc kubenswrapper[4833]: I0219 12:52:54.216670 4833 scope.go:117] "RemoveContainer" containerID="11c67ecb15a56d729e586fc2a27c91452fc2511d378db23750f24c7ab8af476e" Feb 19 12:52:54 crc kubenswrapper[4833]: I0219 12:52:54.218863 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zw6vx" event={"ID":"7d7f1229-1f55-416b-beeb-60a3ae0abc62","Type":"ContainerDied","Data":"01ba816664db5c627e659a4e9a605787487e909ae517f911f3528d4ded6f5b7c"} Feb 19 12:52:54 crc kubenswrapper[4833]: I0219 12:52:54.218987 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zw6vx" Feb 19 12:52:54 crc kubenswrapper[4833]: I0219 12:52:54.233551 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z7wn9" event={"ID":"1011b353-4bd1-4087-b510-22d34e72e48b","Type":"ContainerDied","Data":"681497fc38f4d09434a20e7533c8c2c68909e4fafbf800c930da4ebe75a1e9f3"} Feb 19 12:52:54 crc kubenswrapper[4833]: I0219 12:52:54.233692 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z7wn9" Feb 19 12:52:54 crc kubenswrapper[4833]: I0219 12:52:54.235252 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-c78pj" event={"ID":"29007976-47bd-4251-8d1e-043d4c87270d","Type":"ContainerStarted","Data":"8e01aeb26133c86642edcb67ea142592e77605b970971f7e8480703ee59d1630"} Feb 19 12:52:54 crc kubenswrapper[4833]: I0219 12:52:54.235294 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-c78pj" event={"ID":"29007976-47bd-4251-8d1e-043d4c87270d","Type":"ContainerStarted","Data":"af394000321de21c25c37ed11f48dab2139301b7cd27b68f72c9f56cbf6a8765"} Feb 19 12:52:54 crc kubenswrapper[4833]: I0219 12:52:54.236022 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-c78pj" Feb 19 12:52:54 crc kubenswrapper[4833]: I0219 12:52:54.237739 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-c78pj" Feb 19 12:52:54 crc kubenswrapper[4833]: I0219 12:52:54.238095 4833 scope.go:117] "RemoveContainer" containerID="6bff3fbd4564fe4bf6e2129ffa1843a7f4c0fb57da5e1c5f2c024d7f3d5077e7" Feb 19 12:52:54 crc kubenswrapper[4833]: I0219 12:52:54.241343 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zfz65" Feb 19 12:52:54 crc kubenswrapper[4833]: I0219 12:52:54.241619 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jkz78" Feb 19 12:52:54 crc kubenswrapper[4833]: I0219 12:52:54.242381 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jkz78" event={"ID":"3a9e548c-9edf-4dc6-83a7-4f07f6960721","Type":"ContainerDied","Data":"2f394c335a45878856fc851b2325b86d841d98f7c4d2f95e30c69b12f7c89ca5"} Feb 19 12:52:54 crc kubenswrapper[4833]: I0219 12:52:54.247293 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pjrfv"] Feb 19 12:52:54 crc kubenswrapper[4833]: I0219 12:52:54.253229 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pjrfv"] Feb 19 12:52:54 crc kubenswrapper[4833]: I0219 12:52:54.273156 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-c78pj" podStartSLOduration=2.273136942 podStartE2EDuration="2.273136942s" podCreationTimestamp="2026-02-19 12:52:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:52:54.266801947 +0000 UTC m=+384.662320735" watchObservedRunningTime="2026-02-19 12:52:54.273136942 +0000 UTC m=+384.668655710" Feb 19 12:52:54 crc kubenswrapper[4833]: I0219 12:52:54.273931 4833 scope.go:117] "RemoveContainer" containerID="ff190f086e4f25b44277a860e4123b6dad5931284f107ebc03452500804ea7fa" Feb 19 12:52:54 crc kubenswrapper[4833]: I0219 12:52:54.309707 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zw6vx"] Feb 19 12:52:54 crc kubenswrapper[4833]: I0219 12:52:54.316773 4833 scope.go:117] "RemoveContainer" containerID="080ef65a7973e78ff1e3f33a31141924d7ab8b832ba5d87a8bbff76905cad55d" Feb 19 12:52:54 crc 
kubenswrapper[4833]: I0219 12:52:54.325365 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01feede9-207a-499b-aee1-0fcde52463d6" path="/var/lib/kubelet/pods/01feede9-207a-499b-aee1-0fcde52463d6/volumes" Feb 19 12:52:54 crc kubenswrapper[4833]: I0219 12:52:54.325977 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zw6vx"] Feb 19 12:52:54 crc kubenswrapper[4833]: I0219 12:52:54.332942 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jkz78"] Feb 19 12:52:54 crc kubenswrapper[4833]: I0219 12:52:54.339586 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jkz78"] Feb 19 12:52:54 crc kubenswrapper[4833]: I0219 12:52:54.342960 4833 scope.go:117] "RemoveContainer" containerID="7a97ba92fa433c65e4027908a37bce3478be5f99c029477d2e8ea64f942700a3" Feb 19 12:52:54 crc kubenswrapper[4833]: I0219 12:52:54.343252 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zfz65"] Feb 19 12:52:54 crc kubenswrapper[4833]: I0219 12:52:54.348395 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zfz65"] Feb 19 12:52:54 crc kubenswrapper[4833]: I0219 12:52:54.357016 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z7wn9"] Feb 19 12:52:54 crc kubenswrapper[4833]: I0219 12:52:54.362025 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z7wn9"] Feb 19 12:52:54 crc kubenswrapper[4833]: I0219 12:52:54.366029 4833 scope.go:117] "RemoveContainer" containerID="4bb34b83f731e629aa16d5f339abd5b3033e7855c76ce90fbfc1767a6895270f" Feb 19 12:52:54 crc kubenswrapper[4833]: I0219 12:52:54.381599 4833 scope.go:117] "RemoveContainer" containerID="2915a40c08252e09695fe8c7122c0e99e664bb4959bddef813e113105767af47" Feb 19 12:52:54 crc kubenswrapper[4833]: I0219 12:52:54.395213 4833 scope.go:117] "RemoveContainer" containerID="5416347737fb82cb710e02ab246acafbe88170e54331bb37f4de5ad2aaa68356" Feb 19 12:52:54 crc kubenswrapper[4833]: I0219 12:52:54.408442 4833 scope.go:117] "RemoveContainer" containerID="68dfde320717f266c58d904a10e5d429239f7e76a464c1b367eae8a817ad0316" Feb 19 12:52:54 crc kubenswrapper[4833]: I0219 12:52:54.423655 4833 scope.go:117] "RemoveContainer" containerID="8b9c14c5e785f2b02d27bb45daa3db4d076f62a883a135583aba2263d7819842" Feb 19 12:52:55 crc kubenswrapper[4833]: I0219 12:52:55.687724 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4vk2l"] Feb 19 12:52:55 crc kubenswrapper[4833]: E0219 12:52:55.688591 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1011b353-4bd1-4087-b510-22d34e72e48b" containerName="extract-content" Feb 19 12:52:55 crc kubenswrapper[4833]: I0219 12:52:55.688624 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="1011b353-4bd1-4087-b510-22d34e72e48b" containerName="extract-content" Feb 19 12:52:55 crc kubenswrapper[4833]: E0219 12:52:55.688649 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="248c3a65-f82e-475e-9d61-502028f6c2cc" containerName="extract-utilities" Feb 19 12:52:55 crc kubenswrapper[4833]: I0219 12:52:55.688665 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="248c3a65-f82e-475e-9d61-502028f6c2cc" containerName="extract-utilities" Feb 19 12:52:55 crc kubenswrapper[4833]: E0219 12:52:55.688684 4833 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a9e548c-9edf-4dc6-83a7-4f07f6960721" containerName="extract-utilities" Feb 19 12:52:55 crc kubenswrapper[4833]: I0219 12:52:55.688701 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a9e548c-9edf-4dc6-83a7-4f07f6960721" containerName="extract-utilities" Feb 19 12:52:55 crc kubenswrapper[4833]: E0219 12:52:55.688724 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="248c3a65-f82e-475e-9d61-502028f6c2cc" containerName="extract-content" Feb 19 12:52:55 crc kubenswrapper[4833]: I0219 12:52:55.688742 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="248c3a65-f82e-475e-9d61-502028f6c2cc" containerName="extract-content" Feb 19 12:52:55 crc kubenswrapper[4833]: E0219 12:52:55.688758 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a9e548c-9edf-4dc6-83a7-4f07f6960721" containerName="registry-server" Feb 19 12:52:55 crc kubenswrapper[4833]: I0219 12:52:55.688773 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a9e548c-9edf-4dc6-83a7-4f07f6960721" containerName="registry-server" Feb 19 12:52:55 crc kubenswrapper[4833]: E0219 12:52:55.688800 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a9e548c-9edf-4dc6-83a7-4f07f6960721" containerName="extract-content" Feb 19 12:52:55 crc kubenswrapper[4833]: I0219 12:52:55.688816 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a9e548c-9edf-4dc6-83a7-4f07f6960721" containerName="extract-content" Feb 19 12:52:55 crc kubenswrapper[4833]: E0219 12:52:55.688840 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01feede9-207a-499b-aee1-0fcde52463d6" containerName="extract-content" Feb 19 12:52:55 crc kubenswrapper[4833]: I0219 12:52:55.688856 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="01feede9-207a-499b-aee1-0fcde52463d6" containerName="extract-content" Feb 19 12:52:55 crc kubenswrapper[4833]: E0219 12:52:55.688878 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d7f1229-1f55-416b-beeb-60a3ae0abc62" containerName="marketplace-operator" Feb 19 12:52:55 crc kubenswrapper[4833]: I0219 12:52:55.688896 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d7f1229-1f55-416b-beeb-60a3ae0abc62" containerName="marketplace-operator" Feb 19 12:52:55 crc kubenswrapper[4833]: E0219 12:52:55.688922 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01feede9-207a-499b-aee1-0fcde52463d6" containerName="registry-server" Feb 19 12:52:55 crc kubenswrapper[4833]: I0219 12:52:55.688938 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="01feede9-207a-499b-aee1-0fcde52463d6" containerName="registry-server" Feb 19 12:52:55 crc kubenswrapper[4833]: E0219 12:52:55.688958 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d7f1229-1f55-416b-beeb-60a3ae0abc62" containerName="marketplace-operator" Feb 19 12:52:55 crc kubenswrapper[4833]: I0219 12:52:55.688974 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d7f1229-1f55-416b-beeb-60a3ae0abc62" containerName="marketplace-operator" Feb 19 12:52:55 crc kubenswrapper[4833]: E0219 12:52:55.688996 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01feede9-207a-499b-aee1-0fcde52463d6" containerName="extract-utilities" Feb 19 12:52:55 crc kubenswrapper[4833]: I0219 12:52:55.689011 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="01feede9-207a-499b-aee1-0fcde52463d6" containerName="extract-utilities" Feb 19 12:52:55 crc 
kubenswrapper[4833]: E0219 12:52:55.689036 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="248c3a65-f82e-475e-9d61-502028f6c2cc" containerName="registry-server" Feb 19 12:52:55 crc kubenswrapper[4833]: I0219 12:52:55.689051 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="248c3a65-f82e-475e-9d61-502028f6c2cc" containerName="registry-server" Feb 19 12:52:55 crc kubenswrapper[4833]: E0219 12:52:55.689076 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1011b353-4bd1-4087-b510-22d34e72e48b" containerName="registry-server" Feb 19 12:52:55 crc kubenswrapper[4833]: I0219 12:52:55.689093 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="1011b353-4bd1-4087-b510-22d34e72e48b" containerName="registry-server" Feb 19 12:52:55 crc kubenswrapper[4833]: E0219 12:52:55.689119 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1011b353-4bd1-4087-b510-22d34e72e48b" containerName="extract-utilities" Feb 19 12:52:55 crc kubenswrapper[4833]: I0219 12:52:55.689135 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="1011b353-4bd1-4087-b510-22d34e72e48b" containerName="extract-utilities" Feb 19 12:52:55 crc kubenswrapper[4833]: I0219 12:52:55.689340 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="01feede9-207a-499b-aee1-0fcde52463d6" containerName="registry-server" Feb 19 12:52:55 crc kubenswrapper[4833]: I0219 12:52:55.689379 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a9e548c-9edf-4dc6-83a7-4f07f6960721" containerName="registry-server" Feb 19 12:52:55 crc kubenswrapper[4833]: I0219 12:52:55.689561 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d7f1229-1f55-416b-beeb-60a3ae0abc62" containerName="marketplace-operator" Feb 19 12:52:55 crc kubenswrapper[4833]: I0219 12:52:55.689589 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="1011b353-4bd1-4087-b510-22d34e72e48b" containerName="registry-server" Feb 19 12:52:55 crc kubenswrapper[4833]: I0219 12:52:55.689620 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="248c3a65-f82e-475e-9d61-502028f6c2cc" containerName="registry-server" Feb 19 12:52:55 crc kubenswrapper[4833]: I0219 12:52:55.690194 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d7f1229-1f55-416b-beeb-60a3ae0abc62" containerName="marketplace-operator" Feb 19 12:52:55 crc kubenswrapper[4833]: I0219 12:52:55.691377 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4vk2l" Feb 19 12:52:55 crc kubenswrapper[4833]: I0219 12:52:55.695006 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 19 12:52:55 crc kubenswrapper[4833]: I0219 12:52:55.697879 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vk2l"] Feb 19 12:52:55 crc kubenswrapper[4833]: I0219 12:52:55.823457 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4f33a97-ce68-43a9-a79b-df50f34c1f96-utilities\") pod \"redhat-marketplace-4vk2l\" (UID: \"c4f33a97-ce68-43a9-a79b-df50f34c1f96\") " pod="openshift-marketplace/redhat-marketplace-4vk2l" Feb 19 12:52:55 crc kubenswrapper[4833]: I0219 12:52:55.823607 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4f33a97-ce68-43a9-a79b-df50f34c1f96-catalog-content\") pod \"redhat-marketplace-4vk2l\" (UID: \"c4f33a97-ce68-43a9-a79b-df50f34c1f96\") " pod="openshift-marketplace/redhat-marketplace-4vk2l" Feb 19 12:52:55 crc kubenswrapper[4833]: I0219 12:52:55.823661 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sshn\" (UniqueName: \"kubernetes.io/projected/c4f33a97-ce68-43a9-a79b-df50f34c1f96-kube-api-access-9sshn\") pod \"redhat-marketplace-4vk2l\" (UID: \"c4f33a97-ce68-43a9-a79b-df50f34c1f96\") " pod="openshift-marketplace/redhat-marketplace-4vk2l" Feb 19 12:52:55 crc kubenswrapper[4833]: I0219 12:52:55.924651 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4f33a97-ce68-43a9-a79b-df50f34c1f96-catalog-content\") pod \"redhat-marketplace-4vk2l\" (UID: \"c4f33a97-ce68-43a9-a79b-df50f34c1f96\") " pod="openshift-marketplace/redhat-marketplace-4vk2l" Feb 19 12:52:55 crc kubenswrapper[4833]: I0219 12:52:55.924734 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sshn\" (UniqueName: \"kubernetes.io/projected/c4f33a97-ce68-43a9-a79b-df50f34c1f96-kube-api-access-9sshn\") pod \"redhat-marketplace-4vk2l\" (UID: \"c4f33a97-ce68-43a9-a79b-df50f34c1f96\") " pod="openshift-marketplace/redhat-marketplace-4vk2l" Feb 19 12:52:55 crc kubenswrapper[4833]: I0219 12:52:55.924934 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4f33a97-ce68-43a9-a79b-df50f34c1f96-utilities\") pod \"redhat-marketplace-4vk2l\" (UID: \"c4f33a97-ce68-43a9-a79b-df50f34c1f96\") " pod="openshift-marketplace/redhat-marketplace-4vk2l" Feb 19 12:52:55 crc kubenswrapper[4833]: I0219 12:52:55.925623 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4f33a97-ce68-43a9-a79b-df50f34c1f96-catalog-content\") pod \"redhat-marketplace-4vk2l\" (UID: \"c4f33a97-ce68-43a9-a79b-df50f34c1f96\") " pod="openshift-marketplace/redhat-marketplace-4vk2l" Feb 19 12:52:55 crc kubenswrapper[4833]: I0219 12:52:55.925684 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4f33a97-ce68-43a9-a79b-df50f34c1f96-utilities\") pod \"redhat-marketplace-4vk2l\" (UID: 
\"c4f33a97-ce68-43a9-a79b-df50f34c1f96\") " pod="openshift-marketplace/redhat-marketplace-4vk2l" Feb 19 12:52:55 crc kubenswrapper[4833]: I0219 12:52:55.945101 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sshn\" (UniqueName: \"kubernetes.io/projected/c4f33a97-ce68-43a9-a79b-df50f34c1f96-kube-api-access-9sshn\") pod \"redhat-marketplace-4vk2l\" (UID: \"c4f33a97-ce68-43a9-a79b-df50f34c1f96\") " pod="openshift-marketplace/redhat-marketplace-4vk2l" Feb 19 12:52:56 crc kubenswrapper[4833]: I0219 12:52:56.008117 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4vk2l" Feb 19 12:52:56 crc kubenswrapper[4833]: I0219 12:52:56.322215 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1011b353-4bd1-4087-b510-22d34e72e48b" path="/var/lib/kubelet/pods/1011b353-4bd1-4087-b510-22d34e72e48b/volumes" Feb 19 12:52:56 crc kubenswrapper[4833]: I0219 12:52:56.323076 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="248c3a65-f82e-475e-9d61-502028f6c2cc" path="/var/lib/kubelet/pods/248c3a65-f82e-475e-9d61-502028f6c2cc/volumes" Feb 19 12:52:56 crc kubenswrapper[4833]: I0219 12:52:56.323654 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a9e548c-9edf-4dc6-83a7-4f07f6960721" path="/var/lib/kubelet/pods/3a9e548c-9edf-4dc6-83a7-4f07f6960721/volumes" Feb 19 12:52:56 crc kubenswrapper[4833]: I0219 12:52:56.324722 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d7f1229-1f55-416b-beeb-60a3ae0abc62" path="/var/lib/kubelet/pods/7d7f1229-1f55-416b-beeb-60a3ae0abc62/volumes" Feb 19 12:52:56 crc kubenswrapper[4833]: I0219 12:52:56.407544 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4vk2l"] Feb 19 12:52:56 crc kubenswrapper[4833]: W0219 12:52:56.414007 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4f33a97_ce68_43a9_a79b_df50f34c1f96.slice/crio-647ff78066275e728e95f35b869d9b67299e5d6cfc30c7621f231ee3849dfa7c WatchSource:0}: Error finding container 647ff78066275e728e95f35b869d9b67299e5d6cfc30c7621f231ee3849dfa7c: Status 404 returned error can't find the container with id 647ff78066275e728e95f35b869d9b67299e5d6cfc30c7621f231ee3849dfa7c Feb 19 12:52:56 crc kubenswrapper[4833]: I0219 12:52:56.678468 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kxdnw"] Feb 19 12:52:56 crc kubenswrapper[4833]: I0219 12:52:56.682004 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kxdnw" Feb 19 12:52:56 crc kubenswrapper[4833]: I0219 12:52:56.684278 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 19 12:52:56 crc kubenswrapper[4833]: I0219 12:52:56.685834 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kxdnw"] Feb 19 12:52:56 crc kubenswrapper[4833]: I0219 12:52:56.837526 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/845b40fa-5ca5-47fd-bf13-3b84c9951be6-catalog-content\") pod \"redhat-operators-kxdnw\" (UID: \"845b40fa-5ca5-47fd-bf13-3b84c9951be6\") " pod="openshift-marketplace/redhat-operators-kxdnw" Feb 19 12:52:56 crc kubenswrapper[4833]: I0219 12:52:56.837582 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vv92\" (UniqueName: \"kubernetes.io/projected/845b40fa-5ca5-47fd-bf13-3b84c9951be6-kube-api-access-5vv92\") pod \"redhat-operators-kxdnw\" (UID: \"845b40fa-5ca5-47fd-bf13-3b84c9951be6\") " pod="openshift-marketplace/redhat-operators-kxdnw" Feb 19 12:52:56 crc kubenswrapper[4833]: I0219 12:52:56.837770 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/845b40fa-5ca5-47fd-bf13-3b84c9951be6-utilities\") pod \"redhat-operators-kxdnw\" (UID: \"845b40fa-5ca5-47fd-bf13-3b84c9951be6\") " pod="openshift-marketplace/redhat-operators-kxdnw" Feb 19 12:52:56 crc kubenswrapper[4833]: I0219 12:52:56.938568 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/845b40fa-5ca5-47fd-bf13-3b84c9951be6-utilities\") pod \"redhat-operators-kxdnw\" (UID: \"845b40fa-5ca5-47fd-bf13-3b84c9951be6\") " pod="openshift-marketplace/redhat-operators-kxdnw" Feb 19 12:52:56 crc kubenswrapper[4833]: I0219 12:52:56.938677 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/845b40fa-5ca5-47fd-bf13-3b84c9951be6-catalog-content\") pod \"redhat-operators-kxdnw\" (UID: \"845b40fa-5ca5-47fd-bf13-3b84c9951be6\") " pod="openshift-marketplace/redhat-operators-kxdnw" Feb 19 12:52:56 crc kubenswrapper[4833]: I0219 12:52:56.938714 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vv92\" (UniqueName: \"kubernetes.io/projected/845b40fa-5ca5-47fd-bf13-3b84c9951be6-kube-api-access-5vv92\") pod \"redhat-operators-kxdnw\" (UID: \"845b40fa-5ca5-47fd-bf13-3b84c9951be6\") " pod="openshift-marketplace/redhat-operators-kxdnw" Feb 19 12:52:56 crc kubenswrapper[4833]: I0219 12:52:56.940583 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/845b40fa-5ca5-47fd-bf13-3b84c9951be6-utilities\") pod \"redhat-operators-kxdnw\" (UID: \"845b40fa-5ca5-47fd-bf13-3b84c9951be6\") " pod="openshift-marketplace/redhat-operators-kxdnw" Feb 19 12:52:56 crc kubenswrapper[4833]: I0219 12:52:56.940598 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/845b40fa-5ca5-47fd-bf13-3b84c9951be6-catalog-content\") pod \"redhat-operators-kxdnw\" (UID: \"845b40fa-5ca5-47fd-bf13-3b84c9951be6\") " 
pod="openshift-marketplace/redhat-operators-kxdnw" Feb 19 12:52:56 crc kubenswrapper[4833]: I0219 12:52:56.958345 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vv92\" (UniqueName: \"kubernetes.io/projected/845b40fa-5ca5-47fd-bf13-3b84c9951be6-kube-api-access-5vv92\") pod \"redhat-operators-kxdnw\" (UID: \"845b40fa-5ca5-47fd-bf13-3b84c9951be6\") " pod="openshift-marketplace/redhat-operators-kxdnw" Feb 19 12:52:57 crc kubenswrapper[4833]: I0219 12:52:57.006028 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kxdnw" Feb 19 12:52:57 crc kubenswrapper[4833]: I0219 12:52:57.261291 4833 generic.go:334] "Generic (PLEG): container finished" podID="c4f33a97-ce68-43a9-a79b-df50f34c1f96" containerID="ae160f9f1ec2f6ef9e1efd7263319576b857174e1f0102a4a8013d96868fa185" exitCode=0 Feb 19 12:52:57 crc kubenswrapper[4833]: I0219 12:52:57.261381 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vk2l" event={"ID":"c4f33a97-ce68-43a9-a79b-df50f34c1f96","Type":"ContainerDied","Data":"ae160f9f1ec2f6ef9e1efd7263319576b857174e1f0102a4a8013d96868fa185"} Feb 19 12:52:57 crc kubenswrapper[4833]: I0219 12:52:57.261685 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vk2l" event={"ID":"c4f33a97-ce68-43a9-a79b-df50f34c1f96","Type":"ContainerStarted","Data":"647ff78066275e728e95f35b869d9b67299e5d6cfc30c7621f231ee3849dfa7c"} Feb 19 12:52:57 crc kubenswrapper[4833]: I0219 12:52:57.403552 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kxdnw"] Feb 19 12:52:58 crc kubenswrapper[4833]: I0219 12:52:58.079628 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mkws7"] Feb 19 12:52:58 crc kubenswrapper[4833]: I0219 12:52:58.081552 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mkws7" Feb 19 12:52:58 crc kubenswrapper[4833]: I0219 12:52:58.086067 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 19 12:52:58 crc kubenswrapper[4833]: I0219 12:52:58.104557 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mkws7"] Feb 19 12:52:58 crc kubenswrapper[4833]: I0219 12:52:58.254009 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6237e030-4362-477a-a4dc-b18cbfa467fe-catalog-content\") pod \"community-operators-mkws7\" (UID: \"6237e030-4362-477a-a4dc-b18cbfa467fe\") " pod="openshift-marketplace/community-operators-mkws7" Feb 19 12:52:58 crc kubenswrapper[4833]: I0219 12:52:58.254194 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8cdf\" (UniqueName: \"kubernetes.io/projected/6237e030-4362-477a-a4dc-b18cbfa467fe-kube-api-access-q8cdf\") pod \"community-operators-mkws7\" (UID: \"6237e030-4362-477a-a4dc-b18cbfa467fe\") " pod="openshift-marketplace/community-operators-mkws7" Feb 19 12:52:58 crc kubenswrapper[4833]: I0219 12:52:58.254226 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6237e030-4362-477a-a4dc-b18cbfa467fe-utilities\") pod \"community-operators-mkws7\" (UID: \"6237e030-4362-477a-a4dc-b18cbfa467fe\") " pod="openshift-marketplace/community-operators-mkws7" Feb 19 12:52:58 crc kubenswrapper[4833]: I0219 12:52:58.267555 4833 generic.go:334] "Generic (PLEG): container finished" podID="c4f33a97-ce68-43a9-a79b-df50f34c1f96" containerID="e57c52ee63cd7abf1cd604bdb960ffd9365dbd5c735c1d1d6e43e2d978631a17" exitCode=0 Feb 19 12:52:58 crc kubenswrapper[4833]: I0219 12:52:58.267630 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vk2l" event={"ID":"c4f33a97-ce68-43a9-a79b-df50f34c1f96","Type":"ContainerDied","Data":"e57c52ee63cd7abf1cd604bdb960ffd9365dbd5c735c1d1d6e43e2d978631a17"} Feb 19 12:52:58 crc kubenswrapper[4833]: I0219 12:52:58.269402 4833 generic.go:334] "Generic (PLEG): container finished" podID="845b40fa-5ca5-47fd-bf13-3b84c9951be6" containerID="64dc7456021f543ae22505fd177bfe065cbf0b01438bf13bb441386bb71cdca6" exitCode=0 Feb 19 12:52:58 crc kubenswrapper[4833]: I0219 12:52:58.269436 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kxdnw" event={"ID":"845b40fa-5ca5-47fd-bf13-3b84c9951be6","Type":"ContainerDied","Data":"64dc7456021f543ae22505fd177bfe065cbf0b01438bf13bb441386bb71cdca6"} Feb 19 12:52:58 crc kubenswrapper[4833]: I0219 12:52:58.269457 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kxdnw" event={"ID":"845b40fa-5ca5-47fd-bf13-3b84c9951be6","Type":"ContainerStarted","Data":"a0ca532608ddc981114b60edab529639008dd91ae8161791f32499ce3218e9ff"} Feb 19 12:52:58 crc kubenswrapper[4833]: I0219 12:52:58.355213 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6237e030-4362-477a-a4dc-b18cbfa467fe-catalog-content\") pod \"community-operators-mkws7\" (UID: \"6237e030-4362-477a-a4dc-b18cbfa467fe\") " 
pod="openshift-marketplace/community-operators-mkws7" Feb 19 12:52:58 crc kubenswrapper[4833]: I0219 12:52:58.355263 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8cdf\" (UniqueName: \"kubernetes.io/projected/6237e030-4362-477a-a4dc-b18cbfa467fe-kube-api-access-q8cdf\") pod \"community-operators-mkws7\" (UID: \"6237e030-4362-477a-a4dc-b18cbfa467fe\") " pod="openshift-marketplace/community-operators-mkws7" Feb 19 12:52:58 crc kubenswrapper[4833]: I0219 12:52:58.355301 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6237e030-4362-477a-a4dc-b18cbfa467fe-utilities\") pod \"community-operators-mkws7\" (UID: \"6237e030-4362-477a-a4dc-b18cbfa467fe\") " pod="openshift-marketplace/community-operators-mkws7" Feb 19 12:52:58 crc kubenswrapper[4833]: I0219 12:52:58.355795 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6237e030-4362-477a-a4dc-b18cbfa467fe-utilities\") pod \"community-operators-mkws7\" (UID: \"6237e030-4362-477a-a4dc-b18cbfa467fe\") " pod="openshift-marketplace/community-operators-mkws7" Feb 19 12:52:58 crc kubenswrapper[4833]: I0219 12:52:58.356165 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6237e030-4362-477a-a4dc-b18cbfa467fe-catalog-content\") pod \"community-operators-mkws7\" (UID: \"6237e030-4362-477a-a4dc-b18cbfa467fe\") " pod="openshift-marketplace/community-operators-mkws7" Feb 19 12:52:58 crc kubenswrapper[4833]: I0219 12:52:58.388453 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8cdf\" (UniqueName: \"kubernetes.io/projected/6237e030-4362-477a-a4dc-b18cbfa467fe-kube-api-access-q8cdf\") pod \"community-operators-mkws7\" (UID: \"6237e030-4362-477a-a4dc-b18cbfa467fe\") " pod="openshift-marketplace/community-operators-mkws7" Feb 19 12:52:58 crc kubenswrapper[4833]: I0219 12:52:58.422077 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mkws7" Feb 19 12:52:58 crc kubenswrapper[4833]: I0219 12:52:58.636927 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mkws7"] Feb 19 12:52:58 crc kubenswrapper[4833]: W0219 12:52:58.642477 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6237e030_4362_477a_a4dc_b18cbfa467fe.slice/crio-6507b56a57376f8ea603ab4c947478734c6125dd655d77291737ac33951804f8 WatchSource:0}: Error finding container 6507b56a57376f8ea603ab4c947478734c6125dd655d77291737ac33951804f8: Status 404 returned error can't find the container with id 6507b56a57376f8ea603ab4c947478734c6125dd655d77291737ac33951804f8 Feb 19 12:52:59 crc kubenswrapper[4833]: I0219 12:52:59.077796 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7cc2n"] Feb 19 12:52:59 crc kubenswrapper[4833]: I0219 12:52:59.079407 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7cc2n" Feb 19 12:52:59 crc kubenswrapper[4833]: I0219 12:52:59.085846 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 19 12:52:59 crc kubenswrapper[4833]: I0219 12:52:59.089180 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7cc2n"] Feb 19 12:52:59 crc kubenswrapper[4833]: I0219 12:52:59.167431 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr99s\" (UniqueName: \"kubernetes.io/projected/d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4-kube-api-access-kr99s\") pod \"certified-operators-7cc2n\" (UID: \"d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4\") " pod="openshift-marketplace/certified-operators-7cc2n" Feb 19 12:52:59 crc kubenswrapper[4833]: I0219 12:52:59.167517 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4-catalog-content\") pod \"certified-operators-7cc2n\" (UID: \"d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4\") " pod="openshift-marketplace/certified-operators-7cc2n" Feb 19 12:52:59 crc kubenswrapper[4833]: I0219 12:52:59.167671 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4-utilities\") pod \"certified-operators-7cc2n\" (UID: \"d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4\") " pod="openshift-marketplace/certified-operators-7cc2n" Feb 19 12:52:59 crc kubenswrapper[4833]: I0219 12:52:59.269328 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kr99s\" (UniqueName: \"kubernetes.io/projected/d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4-kube-api-access-kr99s\") pod \"certified-operators-7cc2n\" (UID: \"d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4\") " pod="openshift-marketplace/certified-operators-7cc2n" Feb 19 12:52:59 crc kubenswrapper[4833]: I0219 12:52:59.269414 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4-catalog-content\") pod \"certified-operators-7cc2n\" (UID: \"d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4\") " pod="openshift-marketplace/certified-operators-7cc2n" Feb 19 12:52:59 crc kubenswrapper[4833]: I0219 12:52:59.269457 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4-utilities\") pod \"certified-operators-7cc2n\" (UID: \"d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4\") " pod="openshift-marketplace/certified-operators-7cc2n" Feb 19 12:52:59 crc kubenswrapper[4833]: I0219 12:52:59.270107 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4-utilities\") pod \"certified-operators-7cc2n\" (UID: \"d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4\") " pod="openshift-marketplace/certified-operators-7cc2n" Feb 19 12:52:59 crc kubenswrapper[4833]: I0219 12:52:59.270897 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4-catalog-content\") pod \"certified-operators-7cc2n\" (UID: 
\"d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4\") " pod="openshift-marketplace/certified-operators-7cc2n" Feb 19 12:52:59 crc kubenswrapper[4833]: I0219 12:52:59.275050 4833 generic.go:334] "Generic (PLEG): container finished" podID="6237e030-4362-477a-a4dc-b18cbfa467fe" containerID="bfc3fdf62e72bb0e8a1bdd8df72dabd81488167ab1ada069de6a3ee06f12dd2c" exitCode=0 Feb 19 12:52:59 crc kubenswrapper[4833]: I0219 12:52:59.275131 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkws7" event={"ID":"6237e030-4362-477a-a4dc-b18cbfa467fe","Type":"ContainerDied","Data":"bfc3fdf62e72bb0e8a1bdd8df72dabd81488167ab1ada069de6a3ee06f12dd2c"} Feb 19 12:52:59 crc kubenswrapper[4833]: I0219 12:52:59.275160 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkws7" event={"ID":"6237e030-4362-477a-a4dc-b18cbfa467fe","Type":"ContainerStarted","Data":"6507b56a57376f8ea603ab4c947478734c6125dd655d77291737ac33951804f8"} Feb 19 12:52:59 crc kubenswrapper[4833]: I0219 12:52:59.277060 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kxdnw" event={"ID":"845b40fa-5ca5-47fd-bf13-3b84c9951be6","Type":"ContainerStarted","Data":"b721cbcce7b6fa9b67c627bb47f6894641cc57c985e74ace33338bd902225e42"} Feb 19 12:52:59 crc kubenswrapper[4833]: I0219 12:52:59.279633 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4vk2l" event={"ID":"c4f33a97-ce68-43a9-a79b-df50f34c1f96","Type":"ContainerStarted","Data":"caab357f4a24831ae9db213120b54cbe105e886b96153216562fdf8cf29d369e"} Feb 19 12:52:59 crc kubenswrapper[4833]: I0219 12:52:59.297761 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kr99s\" (UniqueName: \"kubernetes.io/projected/d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4-kube-api-access-kr99s\") pod \"certified-operators-7cc2n\" (UID: \"d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4\") " pod="openshift-marketplace/certified-operators-7cc2n" Feb 19 12:52:59 crc kubenswrapper[4833]: I0219 12:52:59.306436 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4vk2l" podStartSLOduration=2.8909599569999997 podStartE2EDuration="4.306415094s" podCreationTimestamp="2026-02-19 12:52:55 +0000 UTC" firstStartedPulling="2026-02-19 12:52:57.263400018 +0000 UTC m=+387.658918786" lastFinishedPulling="2026-02-19 12:52:58.678855155 +0000 UTC m=+389.074373923" observedRunningTime="2026-02-19 12:52:59.305244031 +0000 UTC m=+389.700762799" watchObservedRunningTime="2026-02-19 12:52:59.306415094 +0000 UTC m=+389.701933862" Feb 19 12:52:59 crc kubenswrapper[4833]: I0219 12:52:59.395189 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7cc2n" Feb 19 12:52:59 crc kubenswrapper[4833]: I0219 12:52:59.769948 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7cc2n"] Feb 19 12:52:59 crc kubenswrapper[4833]: W0219 12:52:59.775621 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd41b2c7c_9ca2_46f4_91b0_8b7cf419d5c4.slice/crio-48b524782c7d77f7572055880c08a6544b5b69f9f4dbd51579fe711061601a69 WatchSource:0}: Error finding container 48b524782c7d77f7572055880c08a6544b5b69f9f4dbd51579fe711061601a69: Status 404 returned error can't find the container with id 48b524782c7d77f7572055880c08a6544b5b69f9f4dbd51579fe711061601a69 Feb 19 12:53:00 crc kubenswrapper[4833]: I0219 12:53:00.294473 4833 generic.go:334] "Generic (PLEG): container finished" podID="845b40fa-5ca5-47fd-bf13-3b84c9951be6" containerID="b721cbcce7b6fa9b67c627bb47f6894641cc57c985e74ace33338bd902225e42" exitCode=0 Feb 19 12:53:00 crc kubenswrapper[4833]: I0219 12:53:00.295448 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kxdnw" event={"ID":"845b40fa-5ca5-47fd-bf13-3b84c9951be6","Type":"ContainerDied","Data":"b721cbcce7b6fa9b67c627bb47f6894641cc57c985e74ace33338bd902225e42"} Feb 19 12:53:00 crc kubenswrapper[4833]: I0219 12:53:00.300215 4833 generic.go:334] "Generic (PLEG): container finished" podID="6237e030-4362-477a-a4dc-b18cbfa467fe" containerID="71b6c9ba092247d8de24477400a6fb0dcc59d35321a09981806c4b4b83b1957a" exitCode=0 Feb 19 12:53:00 crc kubenswrapper[4833]: I0219 12:53:00.300273 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkws7" event={"ID":"6237e030-4362-477a-a4dc-b18cbfa467fe","Type":"ContainerDied","Data":"71b6c9ba092247d8de24477400a6fb0dcc59d35321a09981806c4b4b83b1957a"} Feb 19 12:53:00 crc kubenswrapper[4833]: I0219 12:53:00.312304 4833 generic.go:334] "Generic (PLEG): container finished" podID="d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4" containerID="3b7c450499af53d42d9d5691c060318497f9bdce20f6e07c2bb7a5b900312a09" exitCode=0 Feb 19 12:53:00 crc kubenswrapper[4833]: I0219 12:53:00.313934 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7cc2n" event={"ID":"d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4","Type":"ContainerDied","Data":"3b7c450499af53d42d9d5691c060318497f9bdce20f6e07c2bb7a5b900312a09"} Feb 19 12:53:00 crc kubenswrapper[4833]: I0219 12:53:00.313969 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7cc2n" event={"ID":"d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4","Type":"ContainerStarted","Data":"48b524782c7d77f7572055880c08a6544b5b69f9f4dbd51579fe711061601a69"} Feb 19 12:53:01 crc kubenswrapper[4833]: I0219 12:53:01.319410 4833 generic.go:334] "Generic (PLEG): container finished" podID="d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4" containerID="1076a8383a80d4aea9c28afbed7d9c3e6dd6c15e4b4626689463b0832edd10f9" exitCode=0 Feb 19 12:53:01 crc kubenswrapper[4833]: I0219 12:53:01.319514 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7cc2n" event={"ID":"d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4","Type":"ContainerDied","Data":"1076a8383a80d4aea9c28afbed7d9c3e6dd6c15e4b4626689463b0832edd10f9"} Feb 19 12:53:01 crc kubenswrapper[4833]: I0219 12:53:01.323520 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-kxdnw" event={"ID":"845b40fa-5ca5-47fd-bf13-3b84c9951be6","Type":"ContainerStarted","Data":"39002ccfcd10199b7b646c150203ef769e079ed7f49c6fd3692d81b228e91415"} Feb 19 12:53:01 crc kubenswrapper[4833]: I0219 12:53:01.325823 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkws7" event={"ID":"6237e030-4362-477a-a4dc-b18cbfa467fe","Type":"ContainerStarted","Data":"4b9144d2a12af66d25d3acad25dd08321876e7480ad6bb450573937c8ee2fe97"} Feb 19 12:53:01 crc kubenswrapper[4833]: I0219 12:53:01.355591 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kxdnw" podStartSLOduration=2.8744741769999997 podStartE2EDuration="5.355574504s" podCreationTimestamp="2026-02-19 12:52:56 +0000 UTC" firstStartedPulling="2026-02-19 12:52:58.270665271 +0000 UTC m=+388.666184039" lastFinishedPulling="2026-02-19 12:53:00.751765588 +0000 UTC m=+391.147284366" observedRunningTime="2026-02-19 12:53:01.353035749 +0000 UTC m=+391.748554527" watchObservedRunningTime="2026-02-19 12:53:01.355574504 +0000 UTC m=+391.751093272" Feb 19 12:53:01 crc kubenswrapper[4833]: I0219 12:53:01.373849 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mkws7" podStartSLOduration=1.974795292 podStartE2EDuration="3.373832464s" podCreationTimestamp="2026-02-19 12:52:58 +0000 UTC" firstStartedPulling="2026-02-19 12:52:59.276782103 +0000 UTC m=+389.672300881" lastFinishedPulling="2026-02-19 12:53:00.675819285 +0000 UTC m=+391.071338053" observedRunningTime="2026-02-19 12:53:01.370785486 +0000 UTC m=+391.766304254" watchObservedRunningTime="2026-02-19 12:53:01.373832464 +0000 UTC m=+391.769351242" Feb 19 12:53:01 crc kubenswrapper[4833]: I0219 12:53:01.415937 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-7gfdx" Feb 19 12:53:01 crc kubenswrapper[4833]: I0219 12:53:01.466831 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lhs8n"] Feb 19 12:53:02 crc kubenswrapper[4833]: I0219 12:53:02.331777 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7cc2n" event={"ID":"d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4","Type":"ContainerStarted","Data":"9658a7aa85ed20a1eff402dc28800aee3c8b9846b97a1fb1ebd43b022b3a50c2"} Feb 19 12:53:02 crc kubenswrapper[4833]: I0219 12:53:02.350620 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7cc2n" podStartSLOduration=1.937831528 podStartE2EDuration="3.350602723s" podCreationTimestamp="2026-02-19 12:52:59 +0000 UTC" firstStartedPulling="2026-02-19 12:53:00.320937378 +0000 UTC m=+390.716456146" lastFinishedPulling="2026-02-19 12:53:01.733708583 +0000 UTC m=+392.129227341" observedRunningTime="2026-02-19 12:53:02.347247787 +0000 UTC m=+392.742766565" watchObservedRunningTime="2026-02-19 12:53:02.350602723 +0000 UTC m=+392.746121491" Feb 19 12:53:06 crc kubenswrapper[4833]: I0219 12:53:06.008801 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4vk2l" Feb 19 12:53:06 crc kubenswrapper[4833]: I0219 12:53:06.009365 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4vk2l" Feb 19 12:53:06 crc kubenswrapper[4833]: I0219 
12:53:06.053866 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4vk2l" Feb 19 12:53:06 crc kubenswrapper[4833]: I0219 12:53:06.415722 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4vk2l" Feb 19 12:53:07 crc kubenswrapper[4833]: I0219 12:53:07.006356 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kxdnw" Feb 19 12:53:07 crc kubenswrapper[4833]: I0219 12:53:07.006750 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kxdnw" Feb 19 12:53:07 crc kubenswrapper[4833]: I0219 12:53:07.053706 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kxdnw" Feb 19 12:53:07 crc kubenswrapper[4833]: I0219 12:53:07.398096 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kxdnw" Feb 19 12:53:08 crc kubenswrapper[4833]: I0219 12:53:08.423149 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mkws7" Feb 19 12:53:08 crc kubenswrapper[4833]: I0219 12:53:08.423213 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mkws7" Feb 19 12:53:08 crc kubenswrapper[4833]: I0219 12:53:08.463024 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mkws7" Feb 19 12:53:09 crc kubenswrapper[4833]: I0219 12:53:09.395457 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7cc2n" Feb 19 12:53:09 crc kubenswrapper[4833]: I0219 12:53:09.395791 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7cc2n" Feb 19 12:53:09 crc kubenswrapper[4833]: I0219 12:53:09.420047 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mkws7" Feb 19 12:53:09 crc kubenswrapper[4833]: I0219 12:53:09.444338 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7cc2n" Feb 19 12:53:10 crc kubenswrapper[4833]: I0219 12:53:10.414066 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7cc2n" Feb 19 12:53:15 crc kubenswrapper[4833]: I0219 12:53:15.744922 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 12:53:15 crc kubenswrapper[4833]: I0219 12:53:15.746317 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 12:53:15 crc kubenswrapper[4833]: I0219 12:53:15.746459 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" Feb 19 12:53:15 crc kubenswrapper[4833]: I0219 12:53:15.747066 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bcc68a3815c8acc741b1eb062ad00066f331696d45bcdc1069fe57166e6a3a3c"} pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 12:53:15 crc kubenswrapper[4833]: I0219 12:53:15.747200 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" containerID="cri-o://bcc68a3815c8acc741b1eb062ad00066f331696d45bcdc1069fe57166e6a3a3c" gracePeriod=600 Feb 19 12:53:16 crc kubenswrapper[4833]: I0219 12:53:16.402854 4833 generic.go:334] "Generic (PLEG): container finished" podID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerID="bcc68a3815c8acc741b1eb062ad00066f331696d45bcdc1069fe57166e6a3a3c" exitCode=0 Feb 19 12:53:16 crc kubenswrapper[4833]: I0219 12:53:16.402935 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" event={"ID":"a396d626-cea2-42cf-84c5-943b0b85a92b","Type":"ContainerDied","Data":"bcc68a3815c8acc741b1eb062ad00066f331696d45bcdc1069fe57166e6a3a3c"} Feb 19 12:53:16 crc kubenswrapper[4833]: I0219 12:53:16.403252 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" event={"ID":"a396d626-cea2-42cf-84c5-943b0b85a92b","Type":"ContainerStarted","Data":"bd3bb06bbf28e200008c01033a1abc693e0fd5b8b730530d913d8198d32d5301"} Feb 19 12:53:16 crc kubenswrapper[4833]: I0219 12:53:16.403272 4833 scope.go:117] "RemoveContainer" containerID="26007758ef30af5f69379dad35871543f195804d478bb3bad4a3097719abec16" Feb 19 12:53:26 crc kubenswrapper[4833]: I0219 12:53:26.505283 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" podUID="dfc152d3-9326-4602-8b02-c9fbc8f73199" containerName="registry" containerID="cri-o://8722c3575a814b538bcbad28e02ae26d871ba5861334497def4fbaed0e266e53" gracePeriod=30 Feb 19 12:53:26 crc kubenswrapper[4833]: I0219 12:53:26.974189 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:53:27 crc kubenswrapper[4833]: I0219 12:53:27.069726 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dfc152d3-9326-4602-8b02-c9fbc8f73199-trusted-ca\") pod \"dfc152d3-9326-4602-8b02-c9fbc8f73199\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " Feb 19 12:53:27 crc kubenswrapper[4833]: I0219 12:53:27.069875 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4v7n\" (UniqueName: \"kubernetes.io/projected/dfc152d3-9326-4602-8b02-c9fbc8f73199-kube-api-access-c4v7n\") pod \"dfc152d3-9326-4602-8b02-c9fbc8f73199\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " Feb 19 12:53:27 crc kubenswrapper[4833]: I0219 12:53:27.069947 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dfc152d3-9326-4602-8b02-c9fbc8f73199-registry-tls\") pod \"dfc152d3-9326-4602-8b02-c9fbc8f73199\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " Feb 19 12:53:27 crc kubenswrapper[4833]: I0219 12:53:27.069996 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dfc152d3-9326-4602-8b02-c9fbc8f73199-installation-pull-secrets\") pod \"dfc152d3-9326-4602-8b02-c9fbc8f73199\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " Feb 19 12:53:27 crc kubenswrapper[4833]: I0219 12:53:27.070050 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dfc152d3-9326-4602-8b02-c9fbc8f73199-ca-trust-extracted\") pod \"dfc152d3-9326-4602-8b02-c9fbc8f73199\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " Feb 19 12:53:27 crc kubenswrapper[4833]: I0219 12:53:27.070107 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dfc152d3-9326-4602-8b02-c9fbc8f73199-bound-sa-token\") pod \"dfc152d3-9326-4602-8b02-c9fbc8f73199\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " Feb 19 12:53:27 crc kubenswrapper[4833]: I0219 12:53:27.070155 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dfc152d3-9326-4602-8b02-c9fbc8f73199-registry-certificates\") pod \"dfc152d3-9326-4602-8b02-c9fbc8f73199\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " Feb 19 12:53:27 crc kubenswrapper[4833]: I0219 12:53:27.070434 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"dfc152d3-9326-4602-8b02-c9fbc8f73199\" (UID: \"dfc152d3-9326-4602-8b02-c9fbc8f73199\") " Feb 19 12:53:27 crc kubenswrapper[4833]: I0219 12:53:27.071273 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfc152d3-9326-4602-8b02-c9fbc8f73199-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "dfc152d3-9326-4602-8b02-c9fbc8f73199" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:53:27 crc kubenswrapper[4833]: I0219 12:53:27.071355 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfc152d3-9326-4602-8b02-c9fbc8f73199-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "dfc152d3-9326-4602-8b02-c9fbc8f73199" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:53:27 crc kubenswrapper[4833]: I0219 12:53:27.083679 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfc152d3-9326-4602-8b02-c9fbc8f73199-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "dfc152d3-9326-4602-8b02-c9fbc8f73199" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:53:27 crc kubenswrapper[4833]: I0219 12:53:27.083761 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfc152d3-9326-4602-8b02-c9fbc8f73199-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "dfc152d3-9326-4602-8b02-c9fbc8f73199" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:53:27 crc kubenswrapper[4833]: I0219 12:53:27.085636 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfc152d3-9326-4602-8b02-c9fbc8f73199-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "dfc152d3-9326-4602-8b02-c9fbc8f73199" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:53:27 crc kubenswrapper[4833]: I0219 12:53:27.087751 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "dfc152d3-9326-4602-8b02-c9fbc8f73199" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 19 12:53:27 crc kubenswrapper[4833]: I0219 12:53:27.092610 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfc152d3-9326-4602-8b02-c9fbc8f73199-kube-api-access-c4v7n" (OuterVolumeSpecName: "kube-api-access-c4v7n") pod "dfc152d3-9326-4602-8b02-c9fbc8f73199" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199"). InnerVolumeSpecName "kube-api-access-c4v7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:53:27 crc kubenswrapper[4833]: I0219 12:53:27.118001 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfc152d3-9326-4602-8b02-c9fbc8f73199-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "dfc152d3-9326-4602-8b02-c9fbc8f73199" (UID: "dfc152d3-9326-4602-8b02-c9fbc8f73199"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 12:53:27 crc kubenswrapper[4833]: I0219 12:53:27.172214 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4v7n\" (UniqueName: \"kubernetes.io/projected/dfc152d3-9326-4602-8b02-c9fbc8f73199-kube-api-access-c4v7n\") on node \"crc\" DevicePath \"\"" Feb 19 12:53:27 crc kubenswrapper[4833]: I0219 12:53:27.172286 4833 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/dfc152d3-9326-4602-8b02-c9fbc8f73199-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 19 12:53:27 crc kubenswrapper[4833]: I0219 12:53:27.172305 4833 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/dfc152d3-9326-4602-8b02-c9fbc8f73199-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 19 12:53:27 crc kubenswrapper[4833]: I0219 12:53:27.172323 4833 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/dfc152d3-9326-4602-8b02-c9fbc8f73199-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 19 12:53:27 crc kubenswrapper[4833]: I0219 12:53:27.172341 4833 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dfc152d3-9326-4602-8b02-c9fbc8f73199-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 12:53:27 crc kubenswrapper[4833]: I0219 12:53:27.172361 4833 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/dfc152d3-9326-4602-8b02-c9fbc8f73199-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 19 12:53:27 crc kubenswrapper[4833]: I0219 12:53:27.172377 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dfc152d3-9326-4602-8b02-c9fbc8f73199-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 12:53:27 crc kubenswrapper[4833]: I0219 12:53:27.478554 4833 generic.go:334] "Generic (PLEG): container finished" podID="dfc152d3-9326-4602-8b02-c9fbc8f73199" containerID="8722c3575a814b538bcbad28e02ae26d871ba5861334497def4fbaed0e266e53" exitCode=0 Feb 19 12:53:27 crc kubenswrapper[4833]: I0219 12:53:27.478600 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" Feb 19 12:53:27 crc kubenswrapper[4833]: I0219 12:53:27.478617 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" event={"ID":"dfc152d3-9326-4602-8b02-c9fbc8f73199","Type":"ContainerDied","Data":"8722c3575a814b538bcbad28e02ae26d871ba5861334497def4fbaed0e266e53"} Feb 19 12:53:27 crc kubenswrapper[4833]: I0219 12:53:27.478667 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lhs8n" event={"ID":"dfc152d3-9326-4602-8b02-c9fbc8f73199","Type":"ContainerDied","Data":"ed61dbd6f4f033bf69943fdbb369db54c456f14b92831b3869e0b21732afedaa"} Feb 19 12:53:27 crc kubenswrapper[4833]: I0219 12:53:27.478716 4833 scope.go:117] "RemoveContainer" containerID="8722c3575a814b538bcbad28e02ae26d871ba5861334497def4fbaed0e266e53" Feb 19 12:53:27 crc kubenswrapper[4833]: I0219 12:53:27.505326 4833 scope.go:117] "RemoveContainer" containerID="8722c3575a814b538bcbad28e02ae26d871ba5861334497def4fbaed0e266e53" Feb 19 12:53:27 crc kubenswrapper[4833]: E0219 12:53:27.506085 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8722c3575a814b538bcbad28e02ae26d871ba5861334497def4fbaed0e266e53\": container with ID starting with 8722c3575a814b538bcbad28e02ae26d871ba5861334497def4fbaed0e266e53 not found: ID does not exist" containerID="8722c3575a814b538bcbad28e02ae26d871ba5861334497def4fbaed0e266e53" Feb 19 12:53:27 crc kubenswrapper[4833]: I0219 12:53:27.506302 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8722c3575a814b538bcbad28e02ae26d871ba5861334497def4fbaed0e266e53"} err="failed to get container status \"8722c3575a814b538bcbad28e02ae26d871ba5861334497def4fbaed0e266e53\": rpc error: code = NotFound desc = could not find container \"8722c3575a814b538bcbad28e02ae26d871ba5861334497def4fbaed0e266e53\": container with ID starting with 8722c3575a814b538bcbad28e02ae26d871ba5861334497def4fbaed0e266e53 not found: ID does not exist" Feb 19 12:53:27 crc kubenswrapper[4833]: I0219 12:53:27.528468 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lhs8n"] Feb 19 12:53:27 crc kubenswrapper[4833]: I0219 12:53:27.537854 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lhs8n"] Feb 19 12:53:28 crc kubenswrapper[4833]: I0219 12:53:28.327634 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfc152d3-9326-4602-8b02-c9fbc8f73199" path="/var/lib/kubelet/pods/dfc152d3-9326-4602-8b02-c9fbc8f73199/volumes" Feb 19 12:55:15 crc kubenswrapper[4833]: I0219 12:55:15.744454 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 12:55:15 crc kubenswrapper[4833]: I0219 12:55:15.745364 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 12:55:30 crc 
kubenswrapper[4833]: I0219 12:55:30.618208 4833 scope.go:117] "RemoveContainer" containerID="561730f428f1d316a01b2714dabd08ad67988dcf041a13c62de0c3c5011217be" Feb 19 12:55:45 crc kubenswrapper[4833]: I0219 12:55:45.744380 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 12:55:45 crc kubenswrapper[4833]: I0219 12:55:45.745203 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 12:56:15 crc kubenswrapper[4833]: I0219 12:56:15.744988 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 12:56:15 crc kubenswrapper[4833]: I0219 12:56:15.745937 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 12:56:15 crc kubenswrapper[4833]: I0219 12:56:15.746089 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" Feb 19 12:56:15 crc kubenswrapper[4833]: I0219 12:56:15.747835 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bd3bb06bbf28e200008c01033a1abc693e0fd5b8b730530d913d8198d32d5301"} pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 12:56:15 crc kubenswrapper[4833]: I0219 12:56:15.747975 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" containerID="cri-o://bd3bb06bbf28e200008c01033a1abc693e0fd5b8b730530d913d8198d32d5301" gracePeriod=600 Feb 19 12:56:16 crc kubenswrapper[4833]: I0219 12:56:16.761790 4833 generic.go:334] "Generic (PLEG): container finished" podID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerID="bd3bb06bbf28e200008c01033a1abc693e0fd5b8b730530d913d8198d32d5301" exitCode=0 Feb 19 12:56:16 crc kubenswrapper[4833]: I0219 12:56:16.761910 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" event={"ID":"a396d626-cea2-42cf-84c5-943b0b85a92b","Type":"ContainerDied","Data":"bd3bb06bbf28e200008c01033a1abc693e0fd5b8b730530d913d8198d32d5301"} Feb 19 12:56:16 crc kubenswrapper[4833]: I0219 12:56:16.762623 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" 
event={"ID":"a396d626-cea2-42cf-84c5-943b0b85a92b","Type":"ContainerStarted","Data":"cd9eac9e9427e5822654e34b25e68666ba752339a3fe6cb1abe9c3e947b8e9ba"} Feb 19 12:56:16 crc kubenswrapper[4833]: I0219 12:56:16.762659 4833 scope.go:117] "RemoveContainer" containerID="bcc68a3815c8acc741b1eb062ad00066f331696d45bcdc1069fe57166e6a3a3c" Feb 19 12:56:30 crc kubenswrapper[4833]: I0219 12:56:30.659314 4833 scope.go:117] "RemoveContainer" containerID="ceeb2b513f06542a1952246ae8af59eadc3c6aa360913d8ac2426b1955819818" Feb 19 12:56:30 crc kubenswrapper[4833]: I0219 12:56:30.690397 4833 scope.go:117] "RemoveContainer" containerID="5c3b8d4f46414704d9b70f42828bfdad4c272bbc5a5f2953730ace986e91068c" Feb 19 12:57:52 crc kubenswrapper[4833]: I0219 12:57:52.254160 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-rpjfw"] Feb 19 12:57:52 crc kubenswrapper[4833]: E0219 12:57:52.254885 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfc152d3-9326-4602-8b02-c9fbc8f73199" containerName="registry" Feb 19 12:57:52 crc kubenswrapper[4833]: I0219 12:57:52.254898 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfc152d3-9326-4602-8b02-c9fbc8f73199" containerName="registry" Feb 19 12:57:52 crc kubenswrapper[4833]: I0219 12:57:52.254998 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfc152d3-9326-4602-8b02-c9fbc8f73199" containerName="registry" Feb 19 12:57:52 crc kubenswrapper[4833]: I0219 12:57:52.255339 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-rpjfw" Feb 19 12:57:52 crc kubenswrapper[4833]: I0219 12:57:52.258082 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 19 12:57:52 crc kubenswrapper[4833]: I0219 12:57:52.258179 4833 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-z55ll" Feb 19 12:57:52 crc kubenswrapper[4833]: I0219 12:57:52.258399 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 19 12:57:52 crc kubenswrapper[4833]: I0219 12:57:52.262678 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-2zxnh"] Feb 19 12:57:52 crc kubenswrapper[4833]: I0219 12:57:52.263775 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-2zxnh" Feb 19 12:57:52 crc kubenswrapper[4833]: I0219 12:57:52.265717 4833 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-jngcn" Feb 19 12:57:52 crc kubenswrapper[4833]: I0219 12:57:52.278374 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-t7gnv"] Feb 19 12:57:52 crc kubenswrapper[4833]: I0219 12:57:52.279432 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-t7gnv" Feb 19 12:57:52 crc kubenswrapper[4833]: I0219 12:57:52.281129 4833 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-fnclc" Feb 19 12:57:52 crc kubenswrapper[4833]: I0219 12:57:52.282119 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-2zxnh"] Feb 19 12:57:52 crc kubenswrapper[4833]: I0219 12:57:52.306729 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-t7gnv"] Feb 19 12:57:52 crc kubenswrapper[4833]: I0219 12:57:52.320685 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-rpjfw"] Feb 19 12:57:52 crc kubenswrapper[4833]: I0219 12:57:52.349905 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbpdx\" (UniqueName: \"kubernetes.io/projected/96ec41c8-cde8-48b8-99ac-9b56a2e86761-kube-api-access-bbpdx\") pod \"cert-manager-cainjector-cf98fcc89-rpjfw\" (UID: \"96ec41c8-cde8-48b8-99ac-9b56a2e86761\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-rpjfw" Feb 19 12:57:52 crc kubenswrapper[4833]: I0219 12:57:52.349996 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq2mk\" (UniqueName: \"kubernetes.io/projected/0ec64263-cb6a-407b-9ef7-a06af9f1df98-kube-api-access-sq2mk\") pod \"cert-manager-858654f9db-2zxnh\" (UID: \"0ec64263-cb6a-407b-9ef7-a06af9f1df98\") " pod="cert-manager/cert-manager-858654f9db-2zxnh" Feb 19 12:57:52 crc kubenswrapper[4833]: I0219 12:57:52.451257 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq2mk\" (UniqueName: \"kubernetes.io/projected/0ec64263-cb6a-407b-9ef7-a06af9f1df98-kube-api-access-sq2mk\") pod \"cert-manager-858654f9db-2zxnh\" (UID: \"0ec64263-cb6a-407b-9ef7-a06af9f1df98\") " pod="cert-manager/cert-manager-858654f9db-2zxnh" Feb 19 12:57:52 crc kubenswrapper[4833]: I0219 12:57:52.451483 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbpdx\" (UniqueName: \"kubernetes.io/projected/96ec41c8-cde8-48b8-99ac-9b56a2e86761-kube-api-access-bbpdx\") pod \"cert-manager-cainjector-cf98fcc89-rpjfw\" (UID: \"96ec41c8-cde8-48b8-99ac-9b56a2e86761\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-rpjfw" Feb 19 12:57:52 crc kubenswrapper[4833]: I0219 12:57:52.451597 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzc8f\" (UniqueName: \"kubernetes.io/projected/bba14386-87d4-4397-9ba7-beaafe4c15de-kube-api-access-nzc8f\") pod \"cert-manager-webhook-687f57d79b-t7gnv\" (UID: \"bba14386-87d4-4397-9ba7-beaafe4c15de\") " pod="cert-manager/cert-manager-webhook-687f57d79b-t7gnv" Feb 19 12:57:52 crc kubenswrapper[4833]: I0219 12:57:52.470192 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq2mk\" (UniqueName: \"kubernetes.io/projected/0ec64263-cb6a-407b-9ef7-a06af9f1df98-kube-api-access-sq2mk\") pod \"cert-manager-858654f9db-2zxnh\" (UID: \"0ec64263-cb6a-407b-9ef7-a06af9f1df98\") " pod="cert-manager/cert-manager-858654f9db-2zxnh" Feb 19 12:57:52 crc kubenswrapper[4833]: I0219 12:57:52.470391 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbpdx\" (UniqueName: 
\"kubernetes.io/projected/96ec41c8-cde8-48b8-99ac-9b56a2e86761-kube-api-access-bbpdx\") pod \"cert-manager-cainjector-cf98fcc89-rpjfw\" (UID: \"96ec41c8-cde8-48b8-99ac-9b56a2e86761\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-rpjfw" Feb 19 12:57:52 crc kubenswrapper[4833]: I0219 12:57:52.553054 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzc8f\" (UniqueName: \"kubernetes.io/projected/bba14386-87d4-4397-9ba7-beaafe4c15de-kube-api-access-nzc8f\") pod \"cert-manager-webhook-687f57d79b-t7gnv\" (UID: \"bba14386-87d4-4397-9ba7-beaafe4c15de\") " pod="cert-manager/cert-manager-webhook-687f57d79b-t7gnv" Feb 19 12:57:52 crc kubenswrapper[4833]: I0219 12:57:52.570801 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzc8f\" (UniqueName: \"kubernetes.io/projected/bba14386-87d4-4397-9ba7-beaafe4c15de-kube-api-access-nzc8f\") pod \"cert-manager-webhook-687f57d79b-t7gnv\" (UID: \"bba14386-87d4-4397-9ba7-beaafe4c15de\") " pod="cert-manager/cert-manager-webhook-687f57d79b-t7gnv" Feb 19 12:57:52 crc kubenswrapper[4833]: I0219 12:57:52.577040 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-rpjfw" Feb 19 12:57:52 crc kubenswrapper[4833]: I0219 12:57:52.589400 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-2zxnh" Feb 19 12:57:52 crc kubenswrapper[4833]: I0219 12:57:52.599094 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-t7gnv" Feb 19 12:57:52 crc kubenswrapper[4833]: I0219 12:57:52.900029 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-t7gnv"] Feb 19 12:57:52 crc kubenswrapper[4833]: I0219 12:57:52.905141 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 12:57:53 crc kubenswrapper[4833]: I0219 12:57:53.067538 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-2zxnh"] Feb 19 12:57:53 crc kubenswrapper[4833]: W0219 12:57:53.075955 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ec64263_cb6a_407b_9ef7_a06af9f1df98.slice/crio-f3e61c81c65f26dd6126e15c3d6b8a2a5254d42d55e615de8a2ac9612daeb90e WatchSource:0}: Error finding container f3e61c81c65f26dd6126e15c3d6b8a2a5254d42d55e615de8a2ac9612daeb90e: Status 404 returned error can't find the container with id f3e61c81c65f26dd6126e15c3d6b8a2a5254d42d55e615de8a2ac9612daeb90e Feb 19 12:57:53 crc kubenswrapper[4833]: I0219 12:57:53.079606 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-rpjfw"] Feb 19 12:57:53 crc kubenswrapper[4833]: W0219 12:57:53.088614 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96ec41c8_cde8_48b8_99ac_9b56a2e86761.slice/crio-9dc44f16c17bede4b2e33fc4ce118d8809b1c0faaff3d98c4a87c515d17fe2fe WatchSource:0}: Error finding container 9dc44f16c17bede4b2e33fc4ce118d8809b1c0faaff3d98c4a87c515d17fe2fe: Status 404 returned error can't find the container with id 9dc44f16c17bede4b2e33fc4ce118d8809b1c0faaff3d98c4a87c515d17fe2fe Feb 19 12:57:53 crc kubenswrapper[4833]: I0219 12:57:53.412395 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-858654f9db-2zxnh" event={"ID":"0ec64263-cb6a-407b-9ef7-a06af9f1df98","Type":"ContainerStarted","Data":"f3e61c81c65f26dd6126e15c3d6b8a2a5254d42d55e615de8a2ac9612daeb90e"} Feb 19 12:57:53 crc kubenswrapper[4833]: I0219 12:57:53.413806 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-t7gnv" event={"ID":"bba14386-87d4-4397-9ba7-beaafe4c15de","Type":"ContainerStarted","Data":"c67ed5740cd0c1e39a93f3f74c8e185f97d3848a5251295f3999304faa756b7a"} Feb 19 12:57:53 crc kubenswrapper[4833]: I0219 12:57:53.415356 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-rpjfw" event={"ID":"96ec41c8-cde8-48b8-99ac-9b56a2e86761","Type":"ContainerStarted","Data":"9dc44f16c17bede4b2e33fc4ce118d8809b1c0faaff3d98c4a87c515d17fe2fe"} Feb 19 12:57:57 crc kubenswrapper[4833]: I0219 12:57:57.440473 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-t7gnv" event={"ID":"bba14386-87d4-4397-9ba7-beaafe4c15de","Type":"ContainerStarted","Data":"4e98ed703a6499c62079c674906f5d5d07a7b58b9606dbf9949d02bcc48b3861"} Feb 19 12:57:57 crc kubenswrapper[4833]: I0219 12:57:57.440995 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-t7gnv" Feb 19 12:57:57 crc kubenswrapper[4833]: I0219 12:57:57.443190 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-rpjfw" event={"ID":"96ec41c8-cde8-48b8-99ac-9b56a2e86761","Type":"ContainerStarted","Data":"a0d9f46dba883284f9f61a3909e4a9dee354af56a088396325e3096c0f88fa1b"} Feb 19 12:57:57 crc kubenswrapper[4833]: I0219 12:57:57.446025 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-2zxnh" event={"ID":"0ec64263-cb6a-407b-9ef7-a06af9f1df98","Type":"ContainerStarted","Data":"8a6bbddcd40a3af543688fdd93fbcee885e5df7363dca67c3cc856b223f1def5"} Feb 19 12:57:57 crc kubenswrapper[4833]: I0219 12:57:57.472409 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-t7gnv" podStartSLOduration=1.6086030789999999 podStartE2EDuration="5.472386325s" podCreationTimestamp="2026-02-19 12:57:52 +0000 UTC" firstStartedPulling="2026-02-19 12:57:52.904938633 +0000 UTC m=+683.300457401" lastFinishedPulling="2026-02-19 12:57:56.768721869 +0000 UTC m=+687.164240647" observedRunningTime="2026-02-19 12:57:57.469140829 +0000 UTC m=+687.864659607" watchObservedRunningTime="2026-02-19 12:57:57.472386325 +0000 UTC m=+687.867905103" Feb 19 12:57:57 crc kubenswrapper[4833]: I0219 12:57:57.527216 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-rpjfw" podStartSLOduration=1.946793096 podStartE2EDuration="5.527195515s" podCreationTimestamp="2026-02-19 12:57:52 +0000 UTC" firstStartedPulling="2026-02-19 12:57:53.089140566 +0000 UTC m=+683.484659374" lastFinishedPulling="2026-02-19 12:57:56.669543015 +0000 UTC m=+687.065061793" observedRunningTime="2026-02-19 12:57:57.496102362 +0000 UTC m=+687.891621160" watchObservedRunningTime="2026-02-19 12:57:57.527195515 +0000 UTC m=+687.922714283" Feb 19 12:57:57 crc kubenswrapper[4833]: I0219 12:57:57.527326 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-2zxnh" podStartSLOduration=1.940864878 podStartE2EDuration="5.527320708s" 
podCreationTimestamp="2026-02-19 12:57:52 +0000 UTC" firstStartedPulling="2026-02-19 12:57:53.082538881 +0000 UTC m=+683.478057649" lastFinishedPulling="2026-02-19 12:57:56.668994701 +0000 UTC m=+687.064513479" observedRunningTime="2026-02-19 12:57:57.524867793 +0000 UTC m=+687.920386561" watchObservedRunningTime="2026-02-19 12:57:57.527320708 +0000 UTC m=+687.922839476" Feb 19 12:58:02 crc kubenswrapper[4833]: I0219 12:58:02.603696 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-t7gnv" Feb 19 12:58:12 crc kubenswrapper[4833]: I0219 12:58:12.468329 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pwqj9"] Feb 19 12:58:12 crc kubenswrapper[4833]: I0219 12:58:12.469184 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="ovn-controller" containerID="cri-o://56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918" gracePeriod=30 Feb 19 12:58:12 crc kubenswrapper[4833]: I0219 12:58:12.469244 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="northd" containerID="cri-o://327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8" gracePeriod=30 Feb 19 12:58:12 crc kubenswrapper[4833]: I0219 12:58:12.469326 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="ovn-acl-logging" containerID="cri-o://446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d" gracePeriod=30 Feb 19 12:58:12 crc kubenswrapper[4833]: I0219 12:58:12.469268 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="sbdb" containerID="cri-o://e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3" gracePeriod=30 Feb 19 12:58:12 crc kubenswrapper[4833]: I0219 12:58:12.469315 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947" gracePeriod=30 Feb 19 12:58:12 crc kubenswrapper[4833]: I0219 12:58:12.469349 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="kube-rbac-proxy-node" containerID="cri-o://308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b" gracePeriod=30 Feb 19 12:58:12 crc kubenswrapper[4833]: I0219 12:58:12.469405 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="nbdb" containerID="cri-o://00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f" gracePeriod=30 Feb 19 12:58:12 crc kubenswrapper[4833]: I0219 12:58:12.515424 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" 
containerName="ovnkube-controller" containerID="cri-o://de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503" gracePeriod=30 Feb 19 12:58:12 crc kubenswrapper[4833]: I0219 12:58:12.547751 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9p75n_4e1957a0-ea7d-4831-ae8f-630a9529ece1/kube-multus/2.log" Feb 19 12:58:12 crc kubenswrapper[4833]: I0219 12:58:12.548136 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9p75n_4e1957a0-ea7d-4831-ae8f-630a9529ece1/kube-multus/1.log" Feb 19 12:58:12 crc kubenswrapper[4833]: I0219 12:58:12.548180 4833 generic.go:334] "Generic (PLEG): container finished" podID="4e1957a0-ea7d-4831-ae8f-630a9529ece1" containerID="02377c5a8c3efb73f777f9530db44ef08fb3c60dd8af6e87a01675e92eead6f8" exitCode=2 Feb 19 12:58:12 crc kubenswrapper[4833]: I0219 12:58:12.548207 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9p75n" event={"ID":"4e1957a0-ea7d-4831-ae8f-630a9529ece1","Type":"ContainerDied","Data":"02377c5a8c3efb73f777f9530db44ef08fb3c60dd8af6e87a01675e92eead6f8"} Feb 19 12:58:12 crc kubenswrapper[4833]: I0219 12:58:12.548239 4833 scope.go:117] "RemoveContainer" containerID="7038937048d6f77bfed5b0b521c844fe325b30f343970d1b5f654ea93f433aae" Feb 19 12:58:12 crc kubenswrapper[4833]: I0219 12:58:12.548711 4833 scope.go:117] "RemoveContainer" containerID="02377c5a8c3efb73f777f9530db44ef08fb3c60dd8af6e87a01675e92eead6f8" Feb 19 12:58:12 crc kubenswrapper[4833]: E0219 12:58:12.549022 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-9p75n_openshift-multus(4e1957a0-ea7d-4831-ae8f-630a9529ece1)\"" pod="openshift-multus/multus-9p75n" podUID="4e1957a0-ea7d-4831-ae8f-630a9529ece1" Feb 19 12:58:12 crc kubenswrapper[4833]: E0219 12:58:12.853003 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3 is running failed: container process not found" containerID="e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 19 12:58:12 crc kubenswrapper[4833]: E0219 12:58:12.853813 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3 is running failed: container process not found" containerID="e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 19 12:58:12 crc kubenswrapper[4833]: E0219 12:58:12.853880 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f is running failed: container process not found" containerID="00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 19 12:58:12 crc kubenswrapper[4833]: E0219 12:58:12.854236 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3 is running failed: container process not found" containerID="e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Feb 19 12:58:12 crc kubenswrapper[4833]: E0219 12:58:12.854311 4833 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="sbdb" Feb 19 12:58:12 crc kubenswrapper[4833]: E0219 12:58:12.854402 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f is running failed: container process not found" containerID="00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 19 12:58:12 crc kubenswrapper[4833]: E0219 12:58:12.854896 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f is running failed: container process not found" containerID="00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Feb 19 12:58:12 crc kubenswrapper[4833]: E0219 12:58:12.854928 4833 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="nbdb" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.251040 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwqj9_6dafae6a-984e-4e99-90ca-76937bfcc3d6/ovnkube-controller/3.log" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.271008 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwqj9_6dafae6a-984e-4e99-90ca-76937bfcc3d6/ovn-acl-logging/0.log" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.272128 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwqj9_6dafae6a-984e-4e99-90ca-76937bfcc3d6/ovn-controller/0.log" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.273039 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.362294 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rcc86"] Feb 19 12:58:13 crc kubenswrapper[4833]: E0219 12:58:13.362487 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.362509 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 12:58:13 crc kubenswrapper[4833]: E0219 12:58:13.362520 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="ovnkube-controller" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.362526 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="ovnkube-controller" Feb 19 12:58:13 crc kubenswrapper[4833]: E0219 12:58:13.362533 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="sbdb" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.362539 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="sbdb" Feb 19 12:58:13 crc kubenswrapper[4833]: E0219 12:58:13.362549 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="kube-rbac-proxy-node" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.362555 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="kube-rbac-proxy-node" Feb 19 12:58:13 crc kubenswrapper[4833]: E0219 12:58:13.362563 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="ovnkube-controller" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.362568 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="ovnkube-controller" Feb 19 12:58:13 crc kubenswrapper[4833]: E0219 12:58:13.362574 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="kubecfg-setup" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.362579 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="kubecfg-setup" Feb 19 12:58:13 crc kubenswrapper[4833]: E0219 12:58:13.362629 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="northd" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.362698 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="northd" Feb 19 12:58:13 crc kubenswrapper[4833]: E0219 12:58:13.362737 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="ovnkube-controller" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.362745 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="ovnkube-controller" Feb 19 12:58:13 crc kubenswrapper[4833]: E0219 12:58:13.363468 4833 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="ovn-controller" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.363557 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="ovn-controller" Feb 19 12:58:13 crc kubenswrapper[4833]: E0219 12:58:13.363566 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="ovn-acl-logging" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.363572 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="ovn-acl-logging" Feb 19 12:58:13 crc kubenswrapper[4833]: E0219 12:58:13.363582 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="ovnkube-controller" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.363588 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="ovnkube-controller" Feb 19 12:58:13 crc kubenswrapper[4833]: E0219 12:58:13.363598 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="nbdb" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.364492 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="nbdb" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.364625 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="kube-rbac-proxy-node" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.364637 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="ovnkube-controller" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.364644 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="ovn-controller" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.364654 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="ovn-acl-logging" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.364662 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="ovnkube-controller" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.364668 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="sbdb" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.364676 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.364684 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="ovnkube-controller" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.364691 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="nbdb" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.364699 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="northd" Feb 19 12:58:13 crc kubenswrapper[4833]: E0219 12:58:13.364839 4833 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="ovnkube-controller" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.364847 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="ovnkube-controller" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.364972 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="ovnkube-controller" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.364985 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerName="ovnkube-controller" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.366688 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.380094 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-node-log\") pod \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.380148 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-run-systemd\") pod \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.380257 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-run-ovn\") pod \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.380290 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-run-netns\") pod \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.380283 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-node-log" (OuterVolumeSpecName: "node-log") pod "6dafae6a-984e-4e99-90ca-76937bfcc3d6" (UID: "6dafae6a-984e-4e99-90ca-76937bfcc3d6"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.380338 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6dafae6a-984e-4e99-90ca-76937bfcc3d6-ovnkube-script-lib\") pod \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.380371 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-kubelet\") pod \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.380371 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "6dafae6a-984e-4e99-90ca-76937bfcc3d6" (UID: "6dafae6a-984e-4e99-90ca-76937bfcc3d6"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.380410 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "6dafae6a-984e-4e99-90ca-76937bfcc3d6" (UID: "6dafae6a-984e-4e99-90ca-76937bfcc3d6"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.380416 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-run-ovn-kubernetes\") pod \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.380472 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-cni-netd\") pod \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.380542 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-systemd-units\") pod \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.380576 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-log-socket\") pod \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.380465 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "6dafae6a-984e-4e99-90ca-76937bfcc3d6" (UID: "6dafae6a-984e-4e99-90ca-76937bfcc3d6"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.380603 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-cni-bin\") pod \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.380644 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chksh\" (UniqueName: \"kubernetes.io/projected/6dafae6a-984e-4e99-90ca-76937bfcc3d6-kube-api-access-chksh\") pod \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.380676 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6dafae6a-984e-4e99-90ca-76937bfcc3d6-ovnkube-config\") pod \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.380703 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.380566 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "6dafae6a-984e-4e99-90ca-76937bfcc3d6" (UID: "6dafae6a-984e-4e99-90ca-76937bfcc3d6"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.380722 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-etc-openvswitch\") pod \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.380749 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-slash\") pod \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.380768 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6dafae6a-984e-4e99-90ca-76937bfcc3d6-env-overrides\") pod \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.380788 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6dafae6a-984e-4e99-90ca-76937bfcc3d6-ovn-node-metrics-cert\") pod \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.380802 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-run-openvswitch\") pod \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.380823 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-var-lib-openvswitch\") pod \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\" (UID: \"6dafae6a-984e-4e99-90ca-76937bfcc3d6\") " Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.381132 4833 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-node-log\") on node \"crc\" DevicePath \"\"" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.381144 4833 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.381151 4833 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.381161 4833 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.381170 4833 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 19 12:58:13 crc 
kubenswrapper[4833]: I0219 12:58:13.380632 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "6dafae6a-984e-4e99-90ca-76937bfcc3d6" (UID: "6dafae6a-984e-4e99-90ca-76937bfcc3d6"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.380671 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-log-socket" (OuterVolumeSpecName: "log-socket") pod "6dafae6a-984e-4e99-90ca-76937bfcc3d6" (UID: "6dafae6a-984e-4e99-90ca-76937bfcc3d6"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.380696 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "6dafae6a-984e-4e99-90ca-76937bfcc3d6" (UID: "6dafae6a-984e-4e99-90ca-76937bfcc3d6"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.380749 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "6dafae6a-984e-4e99-90ca-76937bfcc3d6" (UID: "6dafae6a-984e-4e99-90ca-76937bfcc3d6"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.381221 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dafae6a-984e-4e99-90ca-76937bfcc3d6-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6dafae6a-984e-4e99-90ca-76937bfcc3d6" (UID: "6dafae6a-984e-4e99-90ca-76937bfcc3d6"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.381250 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-slash" (OuterVolumeSpecName: "host-slash") pod "6dafae6a-984e-4e99-90ca-76937bfcc3d6" (UID: "6dafae6a-984e-4e99-90ca-76937bfcc3d6"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.380788 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "6dafae6a-984e-4e99-90ca-76937bfcc3d6" (UID: "6dafae6a-984e-4e99-90ca-76937bfcc3d6"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.381196 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "6dafae6a-984e-4e99-90ca-76937bfcc3d6" (UID: "6dafae6a-984e-4e99-90ca-76937bfcc3d6"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.381210 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "6dafae6a-984e-4e99-90ca-76937bfcc3d6" (UID: "6dafae6a-984e-4e99-90ca-76937bfcc3d6"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.381616 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dafae6a-984e-4e99-90ca-76937bfcc3d6-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6dafae6a-984e-4e99-90ca-76937bfcc3d6" (UID: "6dafae6a-984e-4e99-90ca-76937bfcc3d6"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.381668 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "6dafae6a-984e-4e99-90ca-76937bfcc3d6" (UID: "6dafae6a-984e-4e99-90ca-76937bfcc3d6"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.382049 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dafae6a-984e-4e99-90ca-76937bfcc3d6-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6dafae6a-984e-4e99-90ca-76937bfcc3d6" (UID: "6dafae6a-984e-4e99-90ca-76937bfcc3d6"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.392725 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dafae6a-984e-4e99-90ca-76937bfcc3d6-kube-api-access-chksh" (OuterVolumeSpecName: "kube-api-access-chksh") pod "6dafae6a-984e-4e99-90ca-76937bfcc3d6" (UID: "6dafae6a-984e-4e99-90ca-76937bfcc3d6"). InnerVolumeSpecName "kube-api-access-chksh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.392822 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dafae6a-984e-4e99-90ca-76937bfcc3d6-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6dafae6a-984e-4e99-90ca-76937bfcc3d6" (UID: "6dafae6a-984e-4e99-90ca-76937bfcc3d6"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.397943 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "6dafae6a-984e-4e99-90ca-76937bfcc3d6" (UID: "6dafae6a-984e-4e99-90ca-76937bfcc3d6"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.482038 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smqsk\" (UniqueName: \"kubernetes.io/projected/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-kube-api-access-smqsk\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.482101 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-run-ovn\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.482129 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-host-run-ovn-kubernetes\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.482152 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-etc-openvswitch\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.482199 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-var-lib-openvswitch\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.482230 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-ovnkube-config\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.482252 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-systemd-units\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.482271 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-node-log\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.482293 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-log-socket\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.482371 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-host-cni-bin\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.482404 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-host-slash\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.482427 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-env-overrides\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.482488 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.482550 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-ovn-node-metrics-cert\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.482569 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-host-kubelet\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.482586 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-host-run-netns\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.482602 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-run-openvswitch\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.482617 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-run-systemd\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.482636 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-host-cni-netd\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.482655 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-ovnkube-script-lib\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.482695 4833 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.482706 4833 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.482715 4833 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-slash\") on node \"crc\" DevicePath \"\"" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.482726 4833 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6dafae6a-984e-4e99-90ca-76937bfcc3d6-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.482734 4833 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6dafae6a-984e-4e99-90ca-76937bfcc3d6-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.482742 4833 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.482750 4833 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.482759 4833 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.482768 4833 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/6dafae6a-984e-4e99-90ca-76937bfcc3d6-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.482778 4833 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.482786 4833 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.482794 4833 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-log-socket\") on node \"crc\" DevicePath \"\"" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.482801 4833 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6dafae6a-984e-4e99-90ca-76937bfcc3d6-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.482810 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chksh\" (UniqueName: \"kubernetes.io/projected/6dafae6a-984e-4e99-90ca-76937bfcc3d6-kube-api-access-chksh\") on node \"crc\" DevicePath \"\"" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.482819 4833 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6dafae6a-984e-4e99-90ca-76937bfcc3d6-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.555785 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9p75n_4e1957a0-ea7d-4831-ae8f-630a9529ece1/kube-multus/2.log" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.558429 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwqj9_6dafae6a-984e-4e99-90ca-76937bfcc3d6/ovnkube-controller/3.log" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.560672 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwqj9_6dafae6a-984e-4e99-90ca-76937bfcc3d6/ovn-acl-logging/0.log" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.561240 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pwqj9_6dafae6a-984e-4e99-90ca-76937bfcc3d6/ovn-controller/0.log" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.561618 4833 generic.go:334] "Generic (PLEG): container finished" podID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerID="de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503" exitCode=0 Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.561656 4833 generic.go:334] "Generic (PLEG): container finished" podID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerID="e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3" exitCode=0 Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.561670 4833 generic.go:334] "Generic (PLEG): container finished" podID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerID="00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f" exitCode=0 Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.561680 4833 generic.go:334] "Generic (PLEG): container finished" 
podID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerID="327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8" exitCode=0 Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.561690 4833 generic.go:334] "Generic (PLEG): container finished" podID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerID="fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947" exitCode=0 Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.561697 4833 generic.go:334] "Generic (PLEG): container finished" podID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerID="308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b" exitCode=0 Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.561703 4833 generic.go:334] "Generic (PLEG): container finished" podID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerID="446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d" exitCode=143 Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.561707 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.561720 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" event={"ID":"6dafae6a-984e-4e99-90ca-76937bfcc3d6","Type":"ContainerDied","Data":"de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.561746 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" event={"ID":"6dafae6a-984e-4e99-90ca-76937bfcc3d6","Type":"ContainerDied","Data":"e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.561760 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" event={"ID":"6dafae6a-984e-4e99-90ca-76937bfcc3d6","Type":"ContainerDied","Data":"00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.561770 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" event={"ID":"6dafae6a-984e-4e99-90ca-76937bfcc3d6","Type":"ContainerDied","Data":"327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.561780 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" event={"ID":"6dafae6a-984e-4e99-90ca-76937bfcc3d6","Type":"ContainerDied","Data":"fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.561791 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" event={"ID":"6dafae6a-984e-4e99-90ca-76937bfcc3d6","Type":"ContainerDied","Data":"308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.561800 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.561811 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 
12:58:13.561816 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.561822 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.561827 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.561832 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.561837 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.561710 4833 generic.go:334] "Generic (PLEG): container finished" podID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" containerID="56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918" exitCode=143 Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.561850 4833 scope.go:117] "RemoveContainer" containerID="de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.561842 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.561879 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.561906 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" event={"ID":"6dafae6a-984e-4e99-90ca-76937bfcc3d6","Type":"ContainerDied","Data":"446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.561929 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.561942 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.561954 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.561965 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.561976 4833 pod_container_deletor.go:114] "Failed to 
issue the request to remove container" containerID={"Type":"cri-o","ID":"327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.561987 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.561998 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.562009 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.562019 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.562032 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.562046 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" event={"ID":"6dafae6a-984e-4e99-90ca-76937bfcc3d6","Type":"ContainerDied","Data":"56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.562063 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.562077 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.562089 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.562100 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.562111 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.562122 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.562133 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.562144 4833 pod_container_deletor.go:114] "Failed to 
issue the request to remove container" containerID={"Type":"cri-o","ID":"446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.562155 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.562165 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.562184 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pwqj9" event={"ID":"6dafae6a-984e-4e99-90ca-76937bfcc3d6","Type":"ContainerDied","Data":"6882d8b77b81dd3c529d2b3a949ec30af347fedcba9e479f8e718c01ba182186"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.562205 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.562222 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.562237 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.562252 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.562266 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.562280 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.562296 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.562309 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.562323 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.562338 4833 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e"} Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.581993 4833 scope.go:117] "RemoveContainer" 
containerID="96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.583530 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-host-run-ovn-kubernetes\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.583586 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-etc-openvswitch\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.583632 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-var-lib-openvswitch\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.583671 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-ovnkube-config\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.583692 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-systemd-units\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.583708 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-etc-openvswitch\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.583719 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-node-log\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.583709 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-host-run-ovn-kubernetes\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.583778 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-var-lib-openvswitch\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.583781 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-systemd-units\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.583852 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-node-log\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.583932 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-log-socket\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.583963 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-host-cni-bin\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.583993 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-host-slash\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.584022 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-env-overrides\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.584033 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-log-socket\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.584060 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.584093 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-host-slash\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.584114 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-ovn-node-metrics-cert\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.584137 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-host-cni-bin\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.584155 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-host-kubelet\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.584184 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.584189 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-host-run-netns\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.584221 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-host-run-netns\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.584238 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-run-openvswitch\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.584270 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-run-systemd\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.584305 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-host-cni-netd\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.584349 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-ovnkube-script-lib\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.584404 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smqsk\" (UniqueName: \"kubernetes.io/projected/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-kube-api-access-smqsk\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.584444 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-run-ovn\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.584486 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-ovnkube-config\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.584642 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-host-kubelet\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.584659 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-run-openvswitch\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.584707 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-run-systemd\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.584717 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-host-cni-netd\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.584543 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-run-ovn\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.585190 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-env-overrides\") pod \"ovnkube-node-rcc86\" (UID: 
\"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.585424 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-ovnkube-script-lib\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.600944 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-ovn-node-metrics-cert\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.607049 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smqsk\" (UniqueName: \"kubernetes.io/projected/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-kube-api-access-smqsk\") pod \"ovnkube-node-rcc86\" (UID: \"0c2e2a34-4345-419c-90cb-d4f43ef24dc4\") " pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.663375 4833 scope.go:117] "RemoveContainer" containerID="e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.672379 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pwqj9"] Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.679159 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pwqj9"] Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.684732 4833 scope.go:117] "RemoveContainer" containerID="00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f"
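
The run of paired lines above is the kubelet volume manager's two-phase mount flow for the new ovnkube-node-rcc86 pod: reconciler_common.go:218 logs "operationExecutor.MountVolume started" when a volume is in the desired state but not yet in the actual state, and operation_generator.go:637 logs "MountVolume.SetUp succeeded" once the operation completes. A minimal Go sketch of that desired-versus-actual reconcile loop; the types, volume names, and inline execution are illustrative only, not kubelet's own code (kubelet runs SetUp asynchronously through an operation executor):

    package main

    import "fmt"

    // volume mirrors the fields visible in the log lines (UniqueName, pod UID);
    // the type itself is an assumption for this sketch.
    type volume struct {
        uniqueName string
        podUID     string
    }

    func main() {
        // Desired state: what the pod spec says must be mounted.
        desired := []volume{
            {"kubernetes.io/host-path/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-run-ovn", "0c2e2a34-4345-419c-90cb-d4f43ef24dc4"},
            {"kubernetes.io/configmap/0c2e2a34-4345-419c-90cb-d4f43ef24dc4-ovnkube-config", "0c2e2a34-4345-419c-90cb-d4f43ef24dc4"},
        }
        // Actual state: what is already mounted on the node.
        mounted := map[string]bool{}

        // One reconcile pass: mount anything desired but not yet actual.
        for _, v := range desired {
            if mounted[v.uniqueName] {
                continue
            }
            fmt.Printf("operationExecutor.MountVolume started for volume %q pod UID %q\n", v.uniqueName, v.podUID)
            mounted[v.uniqueName] = true // stands in for the SetUp work
            fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.uniqueName)
        }
    }
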
Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.694299 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.697208 4833 scope.go:117] "RemoveContainer" containerID="327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.715372 4833 scope.go:117] "RemoveContainer" containerID="fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947" Feb 19 12:58:13 crc kubenswrapper[4833]: W0219 12:58:13.736422 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c2e2a34_4345_419c_90cb_d4f43ef24dc4.slice/crio-5452c133e0fd498ec579fac1e38252d4480d8110f474fdab47b1778979ff014b WatchSource:0}: Error finding container 5452c133e0fd498ec579fac1e38252d4480d8110f474fdab47b1778979ff014b: Status 404 returned error can't find the container with id 5452c133e0fd498ec579fac1e38252d4480d8110f474fdab47b1778979ff014b Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.769334 4833 scope.go:117] "RemoveContainer" containerID="308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.792634 4833 scope.go:117] "RemoveContainer" containerID="446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.821908 4833 scope.go:117] "RemoveContainer" containerID="56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.859974 4833 scope.go:117] "RemoveContainer" containerID="6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.888337 4833 scope.go:117] "RemoveContainer" containerID="de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503" Feb 19 12:58:13 crc kubenswrapper[4833]: E0219 12:58:13.888843 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503\": container with ID starting with de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503 not found: ID does not exist" containerID="de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.888879 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503"} err="failed to get container status \"de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503\": rpc error: code = NotFound desc = could not find container \"de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503\": container with ID starting with de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503 not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.888905 4833 scope.go:117] "RemoveContainer" containerID="96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c" Feb 19 12:58:13 crc kubenswrapper[4833]: E0219 12:58:13.889380 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c\": container with ID starting with 96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c not found: ID does not exist" 
containerID="96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.889407 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c"} err="failed to get container status \"96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c\": rpc error: code = NotFound desc = could not find container \"96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c\": container with ID starting with 96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.889424 4833 scope.go:117] "RemoveContainer" containerID="e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3" Feb 19 12:58:13 crc kubenswrapper[4833]: E0219 12:58:13.889932 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3\": container with ID starting with e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3 not found: ID does not exist" containerID="e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.889979 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3"} err="failed to get container status \"e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3\": rpc error: code = NotFound desc = could not find container \"e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3\": container with ID starting with e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3 not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.890026 4833 scope.go:117] "RemoveContainer" containerID="00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f" Feb 19 12:58:13 crc kubenswrapper[4833]: E0219 12:58:13.890517 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f\": container with ID starting with 00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f not found: ID does not exist" containerID="00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.890552 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f"} err="failed to get container status \"00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f\": rpc error: code = NotFound desc = could not find container \"00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f\": container with ID starting with 00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.890573 4833 scope.go:117] "RemoveContainer" containerID="327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8" Feb 19 12:58:13 crc kubenswrapper[4833]: E0219 12:58:13.890928 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8\": container with ID starting with 327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8 not found: ID does not exist" containerID="327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.890982 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8"} err="failed to get container status \"327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8\": rpc error: code = NotFound desc = could not find container \"327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8\": container with ID starting with 327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8 not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.890999 4833 scope.go:117] "RemoveContainer" containerID="fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947" Feb 19 12:58:13 crc kubenswrapper[4833]: E0219 12:58:13.891396 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947\": container with ID starting with fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947 not found: ID does not exist" containerID="fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.891431 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947"} err="failed to get container status \"fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947\": rpc error: code = NotFound desc = could not find container \"fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947\": container with ID starting with fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947 not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.891445 4833 scope.go:117] "RemoveContainer" containerID="308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b" Feb 19 12:58:13 crc kubenswrapper[4833]: E0219 12:58:13.891764 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b\": container with ID starting with 308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b not found: ID does not exist" containerID="308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.891799 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b"} err="failed to get container status \"308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b\": rpc error: code = NotFound desc = could not find container \"308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b\": container with ID starting with 308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.891820 4833 scope.go:117] "RemoveContainer" containerID="446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d" Feb 19 12:58:13 crc 
kubenswrapper[4833]: E0219 12:58:13.892124 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d\": container with ID starting with 446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d not found: ID does not exist" containerID="446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.892174 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d"} err="failed to get container status \"446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d\": rpc error: code = NotFound desc = could not find container \"446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d\": container with ID starting with 446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.892189 4833 scope.go:117] "RemoveContainer" containerID="56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918" Feb 19 12:58:13 crc kubenswrapper[4833]: E0219 12:58:13.892517 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918\": container with ID starting with 56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918 not found: ID does not exist" containerID="56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.892552 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918"} err="failed to get container status \"56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918\": rpc error: code = NotFound desc = could not find container \"56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918\": container with ID starting with 56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918 not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.892574 4833 scope.go:117] "RemoveContainer" containerID="6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e" Feb 19 12:58:13 crc kubenswrapper[4833]: E0219 12:58:13.892874 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\": container with ID starting with 6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e not found: ID does not exist" containerID="6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.892895 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e"} err="failed to get container status \"6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\": rpc error: code = NotFound desc = could not find container \"6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\": container with ID starting with 6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: 
I0219 12:58:13.892911 4833 scope.go:117] "RemoveContainer" containerID="de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.895303 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503"} err="failed to get container status \"de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503\": rpc error: code = NotFound desc = could not find container \"de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503\": container with ID starting with de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503 not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.895337 4833 scope.go:117] "RemoveContainer" containerID="96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.895947 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c"} err="failed to get container status \"96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c\": rpc error: code = NotFound desc = could not find container \"96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c\": container with ID starting with 96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.896009 4833 scope.go:117] "RemoveContainer" containerID="e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.896387 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3"} err="failed to get container status \"e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3\": rpc error: code = NotFound desc = could not find container \"e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3\": container with ID starting with e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3 not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.896442 4833 scope.go:117] "RemoveContainer" containerID="00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.896729 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f"} err="failed to get container status \"00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f\": rpc error: code = NotFound desc = could not find container \"00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f\": container with ID starting with 00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.896758 4833 scope.go:117] "RemoveContainer" containerID="327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.896997 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8"} err="failed to get container status 
\"327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8\": rpc error: code = NotFound desc = could not find container \"327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8\": container with ID starting with 327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8 not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.897045 4833 scope.go:117] "RemoveContainer" containerID="fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.897278 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947"} err="failed to get container status \"fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947\": rpc error: code = NotFound desc = could not find container \"fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947\": container with ID starting with fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947 not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.897309 4833 scope.go:117] "RemoveContainer" containerID="308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.897578 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b"} err="failed to get container status \"308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b\": rpc error: code = NotFound desc = could not find container \"308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b\": container with ID starting with 308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.897603 4833 scope.go:117] "RemoveContainer" containerID="446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.897902 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d"} err="failed to get container status \"446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d\": rpc error: code = NotFound desc = could not find container \"446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d\": container with ID starting with 446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.897932 4833 scope.go:117] "RemoveContainer" containerID="56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.898225 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918"} err="failed to get container status \"56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918\": rpc error: code = NotFound desc = could not find container \"56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918\": container with ID starting with 56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918 not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.898256 4833 scope.go:117] "RemoveContainer" 
containerID="6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.898529 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e"} err="failed to get container status \"6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\": rpc error: code = NotFound desc = could not find container \"6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\": container with ID starting with 6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.898559 4833 scope.go:117] "RemoveContainer" containerID="de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.898856 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503"} err="failed to get container status \"de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503\": rpc error: code = NotFound desc = could not find container \"de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503\": container with ID starting with de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503 not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.898888 4833 scope.go:117] "RemoveContainer" containerID="96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.899160 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c"} err="failed to get container status \"96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c\": rpc error: code = NotFound desc = could not find container \"96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c\": container with ID starting with 96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.899207 4833 scope.go:117] "RemoveContainer" containerID="e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.899558 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3"} err="failed to get container status \"e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3\": rpc error: code = NotFound desc = could not find container \"e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3\": container with ID starting with e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3 not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.899595 4833 scope.go:117] "RemoveContainer" containerID="00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.899828 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f"} err="failed to get container status \"00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f\": rpc error: code = NotFound desc = could not find 
container \"00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f\": container with ID starting with 00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.899875 4833 scope.go:117] "RemoveContainer" containerID="327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.900285 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8"} err="failed to get container status \"327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8\": rpc error: code = NotFound desc = could not find container \"327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8\": container with ID starting with 327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8 not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.900310 4833 scope.go:117] "RemoveContainer" containerID="fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.900629 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947"} err="failed to get container status \"fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947\": rpc error: code = NotFound desc = could not find container \"fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947\": container with ID starting with fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947 not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.900652 4833 scope.go:117] "RemoveContainer" containerID="308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.900976 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b"} err="failed to get container status \"308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b\": rpc error: code = NotFound desc = could not find container \"308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b\": container with ID starting with 308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.901018 4833 scope.go:117] "RemoveContainer" containerID="446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.901258 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d"} err="failed to get container status \"446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d\": rpc error: code = NotFound desc = could not find container \"446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d\": container with ID starting with 446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.901282 4833 scope.go:117] "RemoveContainer" containerID="56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.901597 4833 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918"} err="failed to get container status \"56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918\": rpc error: code = NotFound desc = could not find container \"56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918\": container with ID starting with 56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918 not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.901642 4833 scope.go:117] "RemoveContainer" containerID="6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.901877 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e"} err="failed to get container status \"6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\": rpc error: code = NotFound desc = could not find container \"6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\": container with ID starting with 6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.901916 4833 scope.go:117] "RemoveContainer" containerID="de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.902169 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503"} err="failed to get container status \"de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503\": rpc error: code = NotFound desc = could not find container \"de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503\": container with ID starting with de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503 not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.902188 4833 scope.go:117] "RemoveContainer" containerID="96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.902427 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c"} err="failed to get container status \"96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c\": rpc error: code = NotFound desc = could not find container \"96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c\": container with ID starting with 96d87ca4be9786100acff503a83efa32f04c3269b9f80b632e7ae0118942073c not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.902446 4833 scope.go:117] "RemoveContainer" containerID="e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.902792 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3"} err="failed to get container status \"e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3\": rpc error: code = NotFound desc = could not find container \"e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3\": container with ID starting with 
e87cd908570fb525bc7ae54c8c0f98dc6f21acefebfc5735d3eb0670510d40c3 not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.902821 4833 scope.go:117] "RemoveContainer" containerID="00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.903035 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f"} err="failed to get container status \"00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f\": rpc error: code = NotFound desc = could not find container \"00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f\": container with ID starting with 00775daecd45226910797281329a373ed2810039d1e74ef3e83d1eb89851fd1f not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.903065 4833 scope.go:117] "RemoveContainer" containerID="327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.903265 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8"} err="failed to get container status \"327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8\": rpc error: code = NotFound desc = could not find container \"327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8\": container with ID starting with 327af9c9069c5f4ca4342e6d6eff1eeb400df54d9f51bd488290b434dfdc9ce8 not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.903298 4833 scope.go:117] "RemoveContainer" containerID="fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.903560 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947"} err="failed to get container status \"fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947\": rpc error: code = NotFound desc = could not find container \"fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947\": container with ID starting with fbb8db9b217bcf50cb63657232fc5e8cd283c5f5dd4c4463954d21bd3d060947 not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.903596 4833 scope.go:117] "RemoveContainer" containerID="308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.903782 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b"} err="failed to get container status \"308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b\": rpc error: code = NotFound desc = could not find container \"308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b\": container with ID starting with 308cdff1385d9261b660256f4b6d0ea868ae402ba1dd68f52bbf6e362ec7896b not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.903810 4833 scope.go:117] "RemoveContainer" containerID="446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.903981 4833 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d"} err="failed to get container status \"446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d\": rpc error: code = NotFound desc = could not find container \"446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d\": container with ID starting with 446d2c01258b7c7d88720f1bcebca3b71a46a882d56998990897bcc3ff21775d not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.904005 4833 scope.go:117] "RemoveContainer" containerID="56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.904237 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918"} err="failed to get container status \"56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918\": rpc error: code = NotFound desc = could not find container \"56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918\": container with ID starting with 56646eac399904be756fcd078646ff1de910bc19a926f36d79b51b341e5d6918 not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.904261 4833 scope.go:117] "RemoveContainer" containerID="6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.904487 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e"} err="failed to get container status \"6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\": rpc error: code = NotFound desc = could not find container \"6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e\": container with ID starting with 6770871e460e4002a4e46fe66e8c498e9e4f0c63f64a6a225f5a420303d89b6e not found: ID does not exist" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.904539 4833 scope.go:117] "RemoveContainer" containerID="de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503" Feb 19 12:58:13 crc kubenswrapper[4833]: I0219 12:58:13.904830 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503"} err="failed to get container status \"de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503\": rpc error: code = NotFound desc = could not find container \"de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503\": container with ID starting with de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503 not found: ID does not exist" Feb 19 12:58:14 crc kubenswrapper[4833]: I0219 12:58:14.323668 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dafae6a-984e-4e99-90ca-76937bfcc3d6" path="/var/lib/kubelet/pods/6dafae6a-984e-4e99-90ca-76937bfcc3d6/volumes" Feb 19 12:58:14 crc kubenswrapper[4833]: I0219 12:58:14.571190 4833 generic.go:334] "Generic (PLEG): container finished" podID="0c2e2a34-4345-419c-90cb-d4f43ef24dc4" containerID="6fd9196805a3040fb80755deaff86f2a578f9f264d51ad5409789af09ea1b775" exitCode=0
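
The long RemoveContainer / "DeleteContainer returned error" run above is cleanup for the ovnkube-node-pwqj9 pod whose SyncLoop DELETE and REMOVE appear earlier: for each old container the kubelet first asks the runtime for status, CRI-O answers with gRPC code NotFound because the container is already gone, and the kubelet records the error and moves on; the repeated blocks are later passes over the same, already-removed set of container IDs, so the errors are noisy but benign. A small Go sketch of that gRPC pattern, assuming the google.golang.org/grpc module; containerStatus is a stub standing in for the CRI ContainerStatus call, not the real client:

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // containerStatus always fails the way the runtime does above,
    // with gRPC code NotFound.
    func containerStatus(id string) error {
        return status.Errorf(codes.NotFound, "could not find container %q", id)
    }

    // deleteContainer treats NotFound as "already removed", which is the
    // sense in which the repeated errors in the log are harmless.
    func deleteContainer(id string) error {
        if err := containerStatus(id); status.Code(err) == codes.NotFound {
            return nil // idempotent delete: nothing left to do
        } else if err != nil {
            return fmt.Errorf("get container status: %w", err)
        }
        // a real implementation would issue RemoveContainer here
        return nil
    }

    func main() {
        fmt.Println(deleteContainer("de026e2bd31c9af2b05c4c3c0614fcc5514b5121ccffa44e7add2677a069c503"))
    }
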
Feb 19 12:58:14 crc kubenswrapper[4833]: I0219 12:58:14.571244 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" event={"ID":"0c2e2a34-4345-419c-90cb-d4f43ef24dc4","Type":"ContainerDied","Data":"6fd9196805a3040fb80755deaff86f2a578f9f264d51ad5409789af09ea1b775"} Feb 19 12:58:14 crc kubenswrapper[4833]: I0219 12:58:14.571278 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" event={"ID":"0c2e2a34-4345-419c-90cb-d4f43ef24dc4","Type":"ContainerStarted","Data":"5452c133e0fd498ec579fac1e38252d4480d8110f474fdab47b1778979ff014b"} Feb 19 12:58:15 crc kubenswrapper[4833]: I0219 12:58:15.582450 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" event={"ID":"0c2e2a34-4345-419c-90cb-d4f43ef24dc4","Type":"ContainerStarted","Data":"568209aec1146f5cdf6b28497729cc2f0182bd92916eba2cbaf959ab7f592e89"} Feb 19 12:58:15 crc kubenswrapper[4833]: I0219 12:58:15.582818 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" event={"ID":"0c2e2a34-4345-419c-90cb-d4f43ef24dc4","Type":"ContainerStarted","Data":"ce2a4d06b1e74f0516ade8095ba7b35d19a71b742f8ffbfae0869b206c62bdf9"} Feb 19 12:58:15 crc kubenswrapper[4833]: I0219 12:58:15.582840 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" event={"ID":"0c2e2a34-4345-419c-90cb-d4f43ef24dc4","Type":"ContainerStarted","Data":"d7038918ed02f2bf1211d26a38ec088f5f533f2d8ada131c9df1483c859011a3"} Feb 19 12:58:15 crc kubenswrapper[4833]: I0219 12:58:15.582852 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" event={"ID":"0c2e2a34-4345-419c-90cb-d4f43ef24dc4","Type":"ContainerStarted","Data":"4bb9f71e1a02b552556168ec4e6c62a5851c09adf07ab4ee7d2c2e1cbbb8ae0c"} Feb 19 12:58:15 crc kubenswrapper[4833]: I0219 12:58:15.582863 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" event={"ID":"0c2e2a34-4345-419c-90cb-d4f43ef24dc4","Type":"ContainerStarted","Data":"bf266e06e6f3bc8407ddcab402c8fc306bbb8882c9d2c9479cafcacd5aa232d3"} Feb 19 12:58:15 crc kubenswrapper[4833]: I0219 12:58:15.582874 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" event={"ID":"0c2e2a34-4345-419c-90cb-d4f43ef24dc4","Type":"ContainerStarted","Data":"70559b69abb3cfce862dc436076f948436c1f620826ceb03819e904efc42f976"} Feb 19 12:58:18 crc kubenswrapper[4833]: I0219 12:58:18.610431 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" event={"ID":"0c2e2a34-4345-419c-90cb-d4f43ef24dc4","Type":"ContainerStarted","Data":"cf647adc2a2df22ba7f839dda5671506a7627a7c7be7d61bf2ee0a763be25504"} Feb 19 12:58:20 crc kubenswrapper[4833]: I0219 12:58:20.628347 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" event={"ID":"0c2e2a34-4345-419c-90cb-d4f43ef24dc4","Type":"ContainerStarted","Data":"d25edc0ee5b98790626e2a21de76602f2c1345afdf051ede6f3b4d2eac2bc67d"} Feb 19 12:58:20 crc kubenswrapper[4833]: I0219 12:58:20.628890 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:20 crc kubenswrapper[4833]: I0219 12:58:20.628904 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:20 crc kubenswrapper[4833]: I0219 12:58:20.628935 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:20 crc kubenswrapper[4833]: I0219 12:58:20.658330 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" podStartSLOduration=7.658312689 podStartE2EDuration="7.658312689s" podCreationTimestamp="2026-02-19 12:58:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:58:20.655874684 +0000 UTC m=+711.051393482" watchObservedRunningTime="2026-02-19 12:58:20.658312689 +0000 UTC m=+711.053831457" Feb 19 12:58:20 crc kubenswrapper[4833]: I0219 12:58:20.702192 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:20 crc kubenswrapper[4833]: I0219 12:58:20.703917 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:23 crc kubenswrapper[4833]: I0219 12:58:23.315136 4833 scope.go:117] "RemoveContainer" containerID="02377c5a8c3efb73f777f9530db44ef08fb3c60dd8af6e87a01675e92eead6f8" Feb 19 12:58:23 crc kubenswrapper[4833]: E0219 12:58:23.316732 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-9p75n_openshift-multus(4e1957a0-ea7d-4831-ae8f-630a9529ece1)\"" pod="openshift-multus/multus-9p75n" podUID="4e1957a0-ea7d-4831-ae8f-630a9529ece1" Feb 19 12:58:38 crc kubenswrapper[4833]: I0219 12:58:38.314893 4833 scope.go:117] "RemoveContainer" containerID="02377c5a8c3efb73f777f9530db44ef08fb3c60dd8af6e87a01675e92eead6f8" Feb 19 12:58:38 crc kubenswrapper[4833]: I0219 12:58:38.747170 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9p75n_4e1957a0-ea7d-4831-ae8f-630a9529ece1/kube-multus/2.log" Feb 19 12:58:38 crc kubenswrapper[4833]: I0219 12:58:38.747570 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9p75n" event={"ID":"4e1957a0-ea7d-4831-ae8f-630a9529ece1","Type":"ContainerStarted","Data":"91e221956fba140d60c6041b2217ea50a720e984912aac7a44a1f64c88ec06a8"} Feb 19 12:58:40 crc kubenswrapper[4833]: I0219 12:58:40.210701 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx"]
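
The "back-off 20s restarting failed container" error above is the kubelet's crash-loop throttle for kube-multus: each failed restart doubles the delay before the next attempt, up to a cap, which is why the follow-up RemoveContainer for the same container ID only appears at 12:58:38 rather than immediately after 12:58:23. A sketch of the doubling, assuming upstream kubelet's usual 10s initial delay and 300s cap (treat both constants as assumptions and verify against your kubelet version); under them, the logged 20s corresponds to the second failure:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const initial = 10 * time.Second      // assumed kubelet default
        const maxBackoff = 300 * time.Second  // assumed cap
        backoff := initial
        for attempt := 1; attempt <= 7; attempt++ {
            fmt.Printf("failed restart %d: next back-off %v\n", attempt, backoff)
            backoff *= 2
            if backoff > maxBackoff {
                backoff = maxBackoff
            }
        }
    }
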
Feb 19 12:58:40 crc kubenswrapper[4833]: I0219 12:58:40.212290 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx" Feb 19 12:58:40 crc kubenswrapper[4833]: I0219 12:58:40.214580 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 12:58:40 crc kubenswrapper[4833]: I0219 12:58:40.222680 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx"] Feb 19 12:58:40 crc kubenswrapper[4833]: I0219 12:58:40.357330 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a61c47e0-c103-451d-802c-fbdebf10dbd9-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx\" (UID: \"a61c47e0-c103-451d-802c-fbdebf10dbd9\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx" Feb 19 12:58:40 crc kubenswrapper[4833]: I0219 12:58:40.357999 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a61c47e0-c103-451d-802c-fbdebf10dbd9-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx\" (UID: \"a61c47e0-c103-451d-802c-fbdebf10dbd9\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx" Feb 19 12:58:40 crc kubenswrapper[4833]: I0219 12:58:40.358056 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2kft\" (UniqueName: \"kubernetes.io/projected/a61c47e0-c103-451d-802c-fbdebf10dbd9-kube-api-access-m2kft\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx\" (UID: \"a61c47e0-c103-451d-802c-fbdebf10dbd9\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx" Feb 19 12:58:40 crc kubenswrapper[4833]: I0219 12:58:40.459364 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a61c47e0-c103-451d-802c-fbdebf10dbd9-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx\" (UID: \"a61c47e0-c103-451d-802c-fbdebf10dbd9\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx" Feb 19 12:58:40 crc kubenswrapper[4833]: I0219 12:58:40.459540 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2kft\" (UniqueName: \"kubernetes.io/projected/a61c47e0-c103-451d-802c-fbdebf10dbd9-kube-api-access-m2kft\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx\" (UID: \"a61c47e0-c103-451d-802c-fbdebf10dbd9\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx" Feb 19 12:58:40 crc kubenswrapper[4833]: I0219 12:58:40.459678 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a61c47e0-c103-451d-802c-fbdebf10dbd9-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx\" (UID: \"a61c47e0-c103-451d-802c-fbdebf10dbd9\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx" Feb 19 12:58:40 crc kubenswrapper[4833]: I0219 12:58:40.461610 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/a61c47e0-c103-451d-802c-fbdebf10dbd9-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx\" (UID: \"a61c47e0-c103-451d-802c-fbdebf10dbd9\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx" Feb 19 12:58:40 crc kubenswrapper[4833]: I0219 12:58:40.462010 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a61c47e0-c103-451d-802c-fbdebf10dbd9-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx\" (UID: \"a61c47e0-c103-451d-802c-fbdebf10dbd9\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx" Feb 19 12:58:40 crc kubenswrapper[4833]: I0219 12:58:40.484102 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2kft\" (UniqueName: \"kubernetes.io/projected/a61c47e0-c103-451d-802c-fbdebf10dbd9-kube-api-access-m2kft\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx\" (UID: \"a61c47e0-c103-451d-802c-fbdebf10dbd9\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx" Feb 19 12:58:40 crc kubenswrapper[4833]: I0219 12:58:40.536430 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx" Feb 19 12:58:40 crc kubenswrapper[4833]: I0219 12:58:40.806647 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx"] Feb 19 12:58:41 crc kubenswrapper[4833]: I0219 12:58:41.775580 4833 generic.go:334] "Generic (PLEG): container finished" podID="a61c47e0-c103-451d-802c-fbdebf10dbd9" containerID="91b2b362d23ec62cfe948231f17038f6a78a075574b4cc8f69e2ecb633e361dc" exitCode=0 Feb 19 12:58:41 crc kubenswrapper[4833]: I0219 12:58:41.776630 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx" event={"ID":"a61c47e0-c103-451d-802c-fbdebf10dbd9","Type":"ContainerDied","Data":"91b2b362d23ec62cfe948231f17038f6a78a075574b4cc8f69e2ecb633e361dc"} Feb 19 12:58:41 crc kubenswrapper[4833]: I0219 12:58:41.776692 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx" event={"ID":"a61c47e0-c103-451d-802c-fbdebf10dbd9","Type":"ContainerStarted","Data":"2f97ba5d1abe5c2f349d27479067ca472fd0a72cd9766050c46069e51f680653"} Feb 19 12:58:43 crc kubenswrapper[4833]: I0219 12:58:43.732330 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rcc86" Feb 19 12:58:43 crc kubenswrapper[4833]: I0219 12:58:43.796777 4833 generic.go:334] "Generic (PLEG): container finished" podID="a61c47e0-c103-451d-802c-fbdebf10dbd9" containerID="3ad4350b61a45d268f5f4bd360dbe89e9d1ece3167e4349b87cc49ad693f7024" exitCode=0 Feb 19 12:58:43 crc kubenswrapper[4833]: I0219 12:58:43.796832 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx" event={"ID":"a61c47e0-c103-451d-802c-fbdebf10dbd9","Type":"ContainerDied","Data":"3ad4350b61a45d268f5f4bd360dbe89e9d1ece3167e4349b87cc49ad693f7024"} Feb 19 12:58:44 crc kubenswrapper[4833]: I0219 12:58:44.806618 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx" event={"ID":"a61c47e0-c103-451d-802c-fbdebf10dbd9","Type":"ContainerStarted","Data":"7a68d5ae82358ec04b18d2b932d966d1bd5f1f95666931de963aa5955842c439"} Feb 19 12:58:44 crc kubenswrapper[4833]: I0219 12:58:44.834774 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx" podStartSLOduration=3.38384837 podStartE2EDuration="4.834747135s" podCreationTimestamp="2026-02-19 12:58:40 +0000 UTC" firstStartedPulling="2026-02-19 12:58:41.778214332 +0000 UTC m=+732.173733110" lastFinishedPulling="2026-02-19 12:58:43.229113097 +0000 UTC m=+733.624631875" observedRunningTime="2026-02-19 12:58:44.831402976 +0000 UTC m=+735.226921754" watchObservedRunningTime="2026-02-19 12:58:44.834747135 +0000 UTC m=+735.230265943" Feb 19 12:58:45 crc kubenswrapper[4833]: I0219 12:58:45.746543 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 12:58:45 crc kubenswrapper[4833]: I0219 12:58:45.746622 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 12:58:45 crc kubenswrapper[4833]: I0219 12:58:45.817672 4833 generic.go:334] "Generic (PLEG): container finished" podID="a61c47e0-c103-451d-802c-fbdebf10dbd9" containerID="7a68d5ae82358ec04b18d2b932d966d1bd5f1f95666931de963aa5955842c439" exitCode=0 Feb 19 12:58:45 crc kubenswrapper[4833]: I0219 12:58:45.817733 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx" event={"ID":"a61c47e0-c103-451d-802c-fbdebf10dbd9","Type":"ContainerDied","Data":"7a68d5ae82358ec04b18d2b932d966d1bd5f1f95666931de963aa5955842c439"}
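
The pod_startup_latency_tracker line above can be checked by hand: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (12:58:44.834747135 - 12:58:40 = 4.834747135s), and podStartSLOduration comes out as that figure minus the image-pull window (4.834747135 - (12:58:43.229113097 - 12:58:41.778214332) = 3.38384837s), matching the logged value exactly; the earlier tracker line for ovnkube-node-rcc86 is the degenerate case with zero-valued pull timestamps, where SLO and E2E are both 7.658312689s. Reading the SLO figure as "E2E minus pull window" is an inference from the logged fields, not a quote of the kubelet source. The same arithmetic in Go, using the timestamps from the log:

    package main

    import (
        "fmt"
        "time"
    )

    // mustParse handles the "2026-02-19 12:58:40 +0000 UTC" form the
    // kubelet prints for these fields.
    func mustParse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2026-02-19 12:58:40 +0000 UTC")
        watchObserved := mustParse("2026-02-19 12:58:44.834747135 +0000 UTC")
        pullStart := mustParse("2026-02-19 12:58:41.778214332 +0000 UTC")
        pullEnd := mustParse("2026-02-19 12:58:43.229113097 +0000 UTC")

        e2e := watchObserved.Sub(created)   // podStartE2EDuration: 4.834747135s
        slo := e2e - pullEnd.Sub(pullStart) // podStartSLOduration: 3.38384837s
        fmt.Println(e2e, slo)
    }
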
Feb 19 12:58:47 crc kubenswrapper[4833]: I0219 12:58:47.078234 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx" Feb 19 12:58:47 crc kubenswrapper[4833]: I0219 12:58:47.178123 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a61c47e0-c103-451d-802c-fbdebf10dbd9-bundle\") pod \"a61c47e0-c103-451d-802c-fbdebf10dbd9\" (UID: \"a61c47e0-c103-451d-802c-fbdebf10dbd9\") " Feb 19 12:58:47 crc kubenswrapper[4833]: I0219 12:58:47.178448 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2kft\" (UniqueName: \"kubernetes.io/projected/a61c47e0-c103-451d-802c-fbdebf10dbd9-kube-api-access-m2kft\") pod \"a61c47e0-c103-451d-802c-fbdebf10dbd9\" (UID: \"a61c47e0-c103-451d-802c-fbdebf10dbd9\") " Feb 19 12:58:47 crc kubenswrapper[4833]: I0219 12:58:47.178675 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a61c47e0-c103-451d-802c-fbdebf10dbd9-util\") pod \"a61c47e0-c103-451d-802c-fbdebf10dbd9\" (UID: \"a61c47e0-c103-451d-802c-fbdebf10dbd9\") " Feb 19 12:58:47 crc kubenswrapper[4833]: I0219 12:58:47.178684 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a61c47e0-c103-451d-802c-fbdebf10dbd9-bundle" (OuterVolumeSpecName: "bundle") pod "a61c47e0-c103-451d-802c-fbdebf10dbd9" (UID: "a61c47e0-c103-451d-802c-fbdebf10dbd9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 12:58:47 crc kubenswrapper[4833]: I0219 12:58:47.179252 4833 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a61c47e0-c103-451d-802c-fbdebf10dbd9-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 12:58:47 crc kubenswrapper[4833]: I0219 12:58:47.185490 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a61c47e0-c103-451d-802c-fbdebf10dbd9-kube-api-access-m2kft" (OuterVolumeSpecName: "kube-api-access-m2kft") pod "a61c47e0-c103-451d-802c-fbdebf10dbd9" (UID: "a61c47e0-c103-451d-802c-fbdebf10dbd9"). InnerVolumeSpecName "kube-api-access-m2kft". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:58:47 crc kubenswrapper[4833]: I0219 12:58:47.189690 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a61c47e0-c103-451d-802c-fbdebf10dbd9-util" (OuterVolumeSpecName: "util") pod "a61c47e0-c103-451d-802c-fbdebf10dbd9" (UID: "a61c47e0-c103-451d-802c-fbdebf10dbd9"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 12:58:47 crc kubenswrapper[4833]: I0219 12:58:47.280417 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2kft\" (UniqueName: \"kubernetes.io/projected/a61c47e0-c103-451d-802c-fbdebf10dbd9-kube-api-access-m2kft\") on node \"crc\" DevicePath \"\"" Feb 19 12:58:47 crc kubenswrapper[4833]: I0219 12:58:47.280687 4833 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a61c47e0-c103-451d-802c-fbdebf10dbd9-util\") on node \"crc\" DevicePath \"\"" Feb 19 12:58:47 crc kubenswrapper[4833]: I0219 12:58:47.835979 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx" event={"ID":"a61c47e0-c103-451d-802c-fbdebf10dbd9","Type":"ContainerDied","Data":"2f97ba5d1abe5c2f349d27479067ca472fd0a72cd9766050c46069e51f680653"} Feb 19 12:58:47 crc kubenswrapper[4833]: I0219 12:58:47.836069 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f97ba5d1abe5c2f349d27479067ca472fd0a72cd9766050c46069e51f680653" Feb 19 12:58:47 crc kubenswrapper[4833]: I0219 12:58:47.836232 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx" Feb 19 12:58:48 crc kubenswrapper[4833]: I0219 12:58:48.911186 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-sqwkx"] Feb 19 12:58:48 crc kubenswrapper[4833]: E0219 12:58:48.911796 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a61c47e0-c103-451d-802c-fbdebf10dbd9" containerName="pull" Feb 19 12:58:48 crc kubenswrapper[4833]: I0219 12:58:48.911812 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a61c47e0-c103-451d-802c-fbdebf10dbd9" containerName="pull" Feb 19 12:58:48 crc kubenswrapper[4833]: E0219 12:58:48.911832 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a61c47e0-c103-451d-802c-fbdebf10dbd9" containerName="util" Feb 19 12:58:48 crc kubenswrapper[4833]: I0219 12:58:48.911840 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a61c47e0-c103-451d-802c-fbdebf10dbd9" containerName="util" Feb 19 12:58:48 crc kubenswrapper[4833]: E0219 12:58:48.911848 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a61c47e0-c103-451d-802c-fbdebf10dbd9" containerName="extract" Feb 19 12:58:48 crc kubenswrapper[4833]: I0219 12:58:48.911856 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a61c47e0-c103-451d-802c-fbdebf10dbd9" containerName="extract" Feb 19 12:58:48 crc kubenswrapper[4833]: I0219 12:58:48.911975 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="a61c47e0-c103-451d-802c-fbdebf10dbd9" containerName="extract" Feb 19 12:58:48 crc kubenswrapper[4833]: I0219 12:58:48.912424 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-sqwkx" Feb 19 12:58:48 crc kubenswrapper[4833]: I0219 12:58:48.915649 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 19 12:58:48 crc kubenswrapper[4833]: I0219 12:58:48.915678 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-scj87" Feb 19 12:58:48 crc kubenswrapper[4833]: I0219 12:58:48.915718 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 19 12:58:48 crc kubenswrapper[4833]: I0219 12:58:48.926057 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-sqwkx"] Feb 19 12:58:49 crc kubenswrapper[4833]: I0219 12:58:49.003948 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvhpm\" (UniqueName: \"kubernetes.io/projected/f99e9621-3d59-431a-874e-0ecb2370cda1-kube-api-access-zvhpm\") pod \"nmstate-operator-694c9596b7-sqwkx\" (UID: \"f99e9621-3d59-431a-874e-0ecb2370cda1\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-sqwkx" Feb 19 12:58:49 crc kubenswrapper[4833]: I0219 12:58:49.105801 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvhpm\" (UniqueName: \"kubernetes.io/projected/f99e9621-3d59-431a-874e-0ecb2370cda1-kube-api-access-zvhpm\") pod \"nmstate-operator-694c9596b7-sqwkx\" (UID: \"f99e9621-3d59-431a-874e-0ecb2370cda1\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-sqwkx" Feb 19 12:58:49 crc kubenswrapper[4833]: I0219 12:58:49.124290 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvhpm\" (UniqueName: \"kubernetes.io/projected/f99e9621-3d59-431a-874e-0ecb2370cda1-kube-api-access-zvhpm\") pod \"nmstate-operator-694c9596b7-sqwkx\" (UID: \"f99e9621-3d59-431a-874e-0ecb2370cda1\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-sqwkx" Feb 19 12:58:49 crc kubenswrapper[4833]: I0219 12:58:49.228160 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-sqwkx" Feb 19 12:58:49 crc kubenswrapper[4833]: I0219 12:58:49.429134 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-sqwkx"] Feb 19 12:58:49 crc kubenswrapper[4833]: I0219 12:58:49.845810 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-sqwkx" event={"ID":"f99e9621-3d59-431a-874e-0ecb2370cda1","Type":"ContainerStarted","Data":"2b923718b0af2b5e1e07b766b1bb5b7c257123d3e86e4c48fd92951be7e5cb1f"} Feb 19 12:58:51 crc kubenswrapper[4833]: I0219 12:58:51.863040 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-sqwkx" event={"ID":"f99e9621-3d59-431a-874e-0ecb2370cda1","Type":"ContainerStarted","Data":"f9bda114e6bf9439a58c69fe5393ac81d75f2a949f3124a84adf58ace470f990"} Feb 19 12:58:51 crc kubenswrapper[4833]: I0219 12:58:51.887243 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-sqwkx" podStartSLOduration=1.721569578 podStartE2EDuration="3.887211658s" podCreationTimestamp="2026-02-19 12:58:48 +0000 UTC" firstStartedPulling="2026-02-19 12:58:49.446809013 +0000 UTC m=+739.842327781" lastFinishedPulling="2026-02-19 12:58:51.612451063 +0000 UTC m=+742.007969861" observedRunningTime="2026-02-19 12:58:51.881417663 +0000 UTC m=+742.276936481" watchObservedRunningTime="2026-02-19 12:58:51.887211658 +0000 UTC m=+742.282730466" Feb 19 12:58:52 crc kubenswrapper[4833]: I0219 12:58:52.939318 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-4szmh"] Feb 19 12:58:52 crc kubenswrapper[4833]: I0219 12:58:52.940122 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4szmh" Feb 19 12:58:52 crc kubenswrapper[4833]: I0219 12:58:52.942909 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-pfgqc" Feb 19 12:58:52 crc kubenswrapper[4833]: I0219 12:58:52.957914 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-x2ld2"] Feb 19 12:58:52 crc kubenswrapper[4833]: I0219 12:58:52.959110 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-x2ld2" Feb 19 12:58:52 crc kubenswrapper[4833]: I0219 12:58:52.961852 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 19 12:58:52 crc kubenswrapper[4833]: I0219 12:58:52.987687 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-4szmh"] Feb 19 12:58:52 crc kubenswrapper[4833]: I0219 12:58:52.996682 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-pb67x"] Feb 19 12:58:52 crc kubenswrapper[4833]: I0219 12:58:52.997732 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-pb67x" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.021693 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-x2ld2"] Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.051927 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqtjb\" (UniqueName: \"kubernetes.io/projected/ae4fac4c-baa2-4e07-aa2a-e1fa2f28aeed-kube-api-access-jqtjb\") pod \"nmstate-metrics-58c85c668d-4szmh\" (UID: \"ae4fac4c-baa2-4e07-aa2a-e1fa2f28aeed\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-4szmh" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.051970 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/336bca49-9b02-4814-a710-f133cc1d3e46-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-x2ld2\" (UID: \"336bca49-9b02-4814-a710-f133cc1d3e46\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-x2ld2" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.052157 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dsgw\" (UniqueName: \"kubernetes.io/projected/336bca49-9b02-4814-a710-f133cc1d3e46-kube-api-access-5dsgw\") pod \"nmstate-webhook-866bcb46dc-x2ld2\" (UID: \"336bca49-9b02-4814-a710-f133cc1d3e46\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-x2ld2" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.071559 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd2ct"] Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.072140 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd2ct" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.075703 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.075903 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.076225 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-5tt9j" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.088538 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd2ct"] Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.153730 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/336bca49-9b02-4814-a710-f133cc1d3e46-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-x2ld2\" (UID: \"336bca49-9b02-4814-a710-f133cc1d3e46\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-x2ld2" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.153830 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc9nw\" (UniqueName: \"kubernetes.io/projected/61daa4d0-c750-45c0-83b1-99ec44ba8842-kube-api-access-kc9nw\") pod \"nmstate-console-plugin-5c78fc5d65-dd2ct\" (UID: \"61daa4d0-c750-45c0-83b1-99ec44ba8842\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd2ct" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.153859 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d5e6d19d-fb8f-4313-bd78-d5f82fa79e40-ovs-socket\") pod \"nmstate-handler-pb67x\" (UID: \"d5e6d19d-fb8f-4313-bd78-d5f82fa79e40\") " pod="openshift-nmstate/nmstate-handler-pb67x" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.153982 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d5e6d19d-fb8f-4313-bd78-d5f82fa79e40-dbus-socket\") pod \"nmstate-handler-pb67x\" (UID: \"d5e6d19d-fb8f-4313-bd78-d5f82fa79e40\") " pod="openshift-nmstate/nmstate-handler-pb67x" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.154023 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdhnv\" (UniqueName: \"kubernetes.io/projected/d5e6d19d-fb8f-4313-bd78-d5f82fa79e40-kube-api-access-bdhnv\") pod \"nmstate-handler-pb67x\" (UID: \"d5e6d19d-fb8f-4313-bd78-d5f82fa79e40\") " pod="openshift-nmstate/nmstate-handler-pb67x" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.154053 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d5e6d19d-fb8f-4313-bd78-d5f82fa79e40-nmstate-lock\") pod \"nmstate-handler-pb67x\" (UID: \"d5e6d19d-fb8f-4313-bd78-d5f82fa79e40\") " pod="openshift-nmstate/nmstate-handler-pb67x" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.154176 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/61daa4d0-c750-45c0-83b1-99ec44ba8842-nginx-conf\") pod 
\"nmstate-console-plugin-5c78fc5d65-dd2ct\" (UID: \"61daa4d0-c750-45c0-83b1-99ec44ba8842\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd2ct" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.154261 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/61daa4d0-c750-45c0-83b1-99ec44ba8842-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-dd2ct\" (UID: \"61daa4d0-c750-45c0-83b1-99ec44ba8842\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd2ct" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.154304 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dsgw\" (UniqueName: \"kubernetes.io/projected/336bca49-9b02-4814-a710-f133cc1d3e46-kube-api-access-5dsgw\") pod \"nmstate-webhook-866bcb46dc-x2ld2\" (UID: \"336bca49-9b02-4814-a710-f133cc1d3e46\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-x2ld2" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.154344 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqtjb\" (UniqueName: \"kubernetes.io/projected/ae4fac4c-baa2-4e07-aa2a-e1fa2f28aeed-kube-api-access-jqtjb\") pod \"nmstate-metrics-58c85c668d-4szmh\" (UID: \"ae4fac4c-baa2-4e07-aa2a-e1fa2f28aeed\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-4szmh" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.159811 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/336bca49-9b02-4814-a710-f133cc1d3e46-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-x2ld2\" (UID: \"336bca49-9b02-4814-a710-f133cc1d3e46\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-x2ld2" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.170278 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dsgw\" (UniqueName: \"kubernetes.io/projected/336bca49-9b02-4814-a710-f133cc1d3e46-kube-api-access-5dsgw\") pod \"nmstate-webhook-866bcb46dc-x2ld2\" (UID: \"336bca49-9b02-4814-a710-f133cc1d3e46\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-x2ld2" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.171364 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqtjb\" (UniqueName: \"kubernetes.io/projected/ae4fac4c-baa2-4e07-aa2a-e1fa2f28aeed-kube-api-access-jqtjb\") pod \"nmstate-metrics-58c85c668d-4szmh\" (UID: \"ae4fac4c-baa2-4e07-aa2a-e1fa2f28aeed\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-4szmh" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.250033 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-b7df6647d-f9nk7"] Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.250838 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-b7df6647d-f9nk7" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.255390 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d5e6d19d-fb8f-4313-bd78-d5f82fa79e40-nmstate-lock\") pod \"nmstate-handler-pb67x\" (UID: \"d5e6d19d-fb8f-4313-bd78-d5f82fa79e40\") " pod="openshift-nmstate/nmstate-handler-pb67x" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.255445 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/61daa4d0-c750-45c0-83b1-99ec44ba8842-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-dd2ct\" (UID: \"61daa4d0-c750-45c0-83b1-99ec44ba8842\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd2ct" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.255467 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/61daa4d0-c750-45c0-83b1-99ec44ba8842-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-dd2ct\" (UID: \"61daa4d0-c750-45c0-83b1-99ec44ba8842\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd2ct" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.255543 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d5e6d19d-fb8f-4313-bd78-d5f82fa79e40-nmstate-lock\") pod \"nmstate-handler-pb67x\" (UID: \"d5e6d19d-fb8f-4313-bd78-d5f82fa79e40\") " pod="openshift-nmstate/nmstate-handler-pb67x" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.255568 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc9nw\" (UniqueName: \"kubernetes.io/projected/61daa4d0-c750-45c0-83b1-99ec44ba8842-kube-api-access-kc9nw\") pod \"nmstate-console-plugin-5c78fc5d65-dd2ct\" (UID: \"61daa4d0-c750-45c0-83b1-99ec44ba8842\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd2ct" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.255601 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d5e6d19d-fb8f-4313-bd78-d5f82fa79e40-ovs-socket\") pod \"nmstate-handler-pb67x\" (UID: \"d5e6d19d-fb8f-4313-bd78-d5f82fa79e40\") " pod="openshift-nmstate/nmstate-handler-pb67x" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.255652 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d5e6d19d-fb8f-4313-bd78-d5f82fa79e40-dbus-socket\") pod \"nmstate-handler-pb67x\" (UID: \"d5e6d19d-fb8f-4313-bd78-d5f82fa79e40\") " pod="openshift-nmstate/nmstate-handler-pb67x" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.255680 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdhnv\" (UniqueName: \"kubernetes.io/projected/d5e6d19d-fb8f-4313-bd78-d5f82fa79e40-kube-api-access-bdhnv\") pod \"nmstate-handler-pb67x\" (UID: \"d5e6d19d-fb8f-4313-bd78-d5f82fa79e40\") " pod="openshift-nmstate/nmstate-handler-pb67x" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.255739 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d5e6d19d-fb8f-4313-bd78-d5f82fa79e40-ovs-socket\") pod \"nmstate-handler-pb67x\" (UID: 
\"d5e6d19d-fb8f-4313-bd78-d5f82fa79e40\") " pod="openshift-nmstate/nmstate-handler-pb67x" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.255985 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d5e6d19d-fb8f-4313-bd78-d5f82fa79e40-dbus-socket\") pod \"nmstate-handler-pb67x\" (UID: \"d5e6d19d-fb8f-4313-bd78-d5f82fa79e40\") " pod="openshift-nmstate/nmstate-handler-pb67x" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.256375 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4szmh" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.260184 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/61daa4d0-c750-45c0-83b1-99ec44ba8842-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-dd2ct\" (UID: \"61daa4d0-c750-45c0-83b1-99ec44ba8842\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd2ct" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.266815 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-b7df6647d-f9nk7"] Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.273425 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/61daa4d0-c750-45c0-83b1-99ec44ba8842-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-dd2ct\" (UID: \"61daa4d0-c750-45c0-83b1-99ec44ba8842\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd2ct" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.275651 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-x2ld2" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.279772 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdhnv\" (UniqueName: \"kubernetes.io/projected/d5e6d19d-fb8f-4313-bd78-d5f82fa79e40-kube-api-access-bdhnv\") pod \"nmstate-handler-pb67x\" (UID: \"d5e6d19d-fb8f-4313-bd78-d5f82fa79e40\") " pod="openshift-nmstate/nmstate-handler-pb67x" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.285941 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc9nw\" (UniqueName: \"kubernetes.io/projected/61daa4d0-c750-45c0-83b1-99ec44ba8842-kube-api-access-kc9nw\") pod \"nmstate-console-plugin-5c78fc5d65-dd2ct\" (UID: \"61daa4d0-c750-45c0-83b1-99ec44ba8842\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd2ct" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.312745 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-pb67x" Feb 19 12:58:53 crc kubenswrapper[4833]: W0219 12:58:53.333216 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5e6d19d_fb8f_4313_bd78_d5f82fa79e40.slice/crio-cf6162b55168faf981bb35e23883d6a7dfe4234c91ae014eb8f5eb5963a08ecb WatchSource:0}: Error finding container cf6162b55168faf981bb35e23883d6a7dfe4234c91ae014eb8f5eb5963a08ecb: Status 404 returned error can't find the container with id cf6162b55168faf981bb35e23883d6a7dfe4234c91ae014eb8f5eb5963a08ecb Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.357245 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/11eedc81-7281-4a5e-9d16-c58d6741e075-oauth-serving-cert\") pod \"console-b7df6647d-f9nk7\" (UID: \"11eedc81-7281-4a5e-9d16-c58d6741e075\") " pod="openshift-console/console-b7df6647d-f9nk7" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.357631 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/11eedc81-7281-4a5e-9d16-c58d6741e075-service-ca\") pod \"console-b7df6647d-f9nk7\" (UID: \"11eedc81-7281-4a5e-9d16-c58d6741e075\") " pod="openshift-console/console-b7df6647d-f9nk7" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.357670 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/11eedc81-7281-4a5e-9d16-c58d6741e075-console-serving-cert\") pod \"console-b7df6647d-f9nk7\" (UID: \"11eedc81-7281-4a5e-9d16-c58d6741e075\") " pod="openshift-console/console-b7df6647d-f9nk7" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.357709 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/11eedc81-7281-4a5e-9d16-c58d6741e075-console-config\") pod \"console-b7df6647d-f9nk7\" (UID: \"11eedc81-7281-4a5e-9d16-c58d6741e075\") " pod="openshift-console/console-b7df6647d-f9nk7" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.357744 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11eedc81-7281-4a5e-9d16-c58d6741e075-trusted-ca-bundle\") pod \"console-b7df6647d-f9nk7\" (UID: \"11eedc81-7281-4a5e-9d16-c58d6741e075\") " pod="openshift-console/console-b7df6647d-f9nk7" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.357769 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/11eedc81-7281-4a5e-9d16-c58d6741e075-console-oauth-config\") pod \"console-b7df6647d-f9nk7\" (UID: \"11eedc81-7281-4a5e-9d16-c58d6741e075\") " pod="openshift-console/console-b7df6647d-f9nk7" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.357808 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvh24\" (UniqueName: \"kubernetes.io/projected/11eedc81-7281-4a5e-9d16-c58d6741e075-kube-api-access-gvh24\") pod \"console-b7df6647d-f9nk7\" (UID: \"11eedc81-7281-4a5e-9d16-c58d6741e075\") " pod="openshift-console/console-b7df6647d-f9nk7" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.389710 
4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd2ct" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.458433 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/11eedc81-7281-4a5e-9d16-c58d6741e075-console-oauth-config\") pod \"console-b7df6647d-f9nk7\" (UID: \"11eedc81-7281-4a5e-9d16-c58d6741e075\") " pod="openshift-console/console-b7df6647d-f9nk7" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.458526 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvh24\" (UniqueName: \"kubernetes.io/projected/11eedc81-7281-4a5e-9d16-c58d6741e075-kube-api-access-gvh24\") pod \"console-b7df6647d-f9nk7\" (UID: \"11eedc81-7281-4a5e-9d16-c58d6741e075\") " pod="openshift-console/console-b7df6647d-f9nk7" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.458586 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/11eedc81-7281-4a5e-9d16-c58d6741e075-service-ca\") pod \"console-b7df6647d-f9nk7\" (UID: \"11eedc81-7281-4a5e-9d16-c58d6741e075\") " pod="openshift-console/console-b7df6647d-f9nk7" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.458610 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/11eedc81-7281-4a5e-9d16-c58d6741e075-oauth-serving-cert\") pod \"console-b7df6647d-f9nk7\" (UID: \"11eedc81-7281-4a5e-9d16-c58d6741e075\") " pod="openshift-console/console-b7df6647d-f9nk7" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.458663 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/11eedc81-7281-4a5e-9d16-c58d6741e075-console-serving-cert\") pod \"console-b7df6647d-f9nk7\" (UID: \"11eedc81-7281-4a5e-9d16-c58d6741e075\") " pod="openshift-console/console-b7df6647d-f9nk7" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.458696 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/11eedc81-7281-4a5e-9d16-c58d6741e075-console-config\") pod \"console-b7df6647d-f9nk7\" (UID: \"11eedc81-7281-4a5e-9d16-c58d6741e075\") " pod="openshift-console/console-b7df6647d-f9nk7" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.458723 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11eedc81-7281-4a5e-9d16-c58d6741e075-trusted-ca-bundle\") pod \"console-b7df6647d-f9nk7\" (UID: \"11eedc81-7281-4a5e-9d16-c58d6741e075\") " pod="openshift-console/console-b7df6647d-f9nk7" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.460061 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/11eedc81-7281-4a5e-9d16-c58d6741e075-service-ca\") pod \"console-b7df6647d-f9nk7\" (UID: \"11eedc81-7281-4a5e-9d16-c58d6741e075\") " pod="openshift-console/console-b7df6647d-f9nk7" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.460152 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/11eedc81-7281-4a5e-9d16-c58d6741e075-console-config\") pod \"console-b7df6647d-f9nk7\" (UID: 
\"11eedc81-7281-4a5e-9d16-c58d6741e075\") " pod="openshift-console/console-b7df6647d-f9nk7" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.460385 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11eedc81-7281-4a5e-9d16-c58d6741e075-trusted-ca-bundle\") pod \"console-b7df6647d-f9nk7\" (UID: \"11eedc81-7281-4a5e-9d16-c58d6741e075\") " pod="openshift-console/console-b7df6647d-f9nk7" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.461017 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/11eedc81-7281-4a5e-9d16-c58d6741e075-oauth-serving-cert\") pod \"console-b7df6647d-f9nk7\" (UID: \"11eedc81-7281-4a5e-9d16-c58d6741e075\") " pod="openshift-console/console-b7df6647d-f9nk7" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.466461 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/11eedc81-7281-4a5e-9d16-c58d6741e075-console-serving-cert\") pod \"console-b7df6647d-f9nk7\" (UID: \"11eedc81-7281-4a5e-9d16-c58d6741e075\") " pod="openshift-console/console-b7df6647d-f9nk7" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.467701 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/11eedc81-7281-4a5e-9d16-c58d6741e075-console-oauth-config\") pod \"console-b7df6647d-f9nk7\" (UID: \"11eedc81-7281-4a5e-9d16-c58d6741e075\") " pod="openshift-console/console-b7df6647d-f9nk7" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.474646 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvh24\" (UniqueName: \"kubernetes.io/projected/11eedc81-7281-4a5e-9d16-c58d6741e075-kube-api-access-gvh24\") pod \"console-b7df6647d-f9nk7\" (UID: \"11eedc81-7281-4a5e-9d16-c58d6741e075\") " pod="openshift-console/console-b7df6647d-f9nk7" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.485042 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-4szmh"] Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.525192 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-x2ld2"] Feb 19 12:58:53 crc kubenswrapper[4833]: W0219 12:58:53.528691 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod336bca49_9b02_4814_a710_f133cc1d3e46.slice/crio-a82aab6a609c1bf78e5dc0f8d90b19de0eb70fa8fe5725cbf01da73b9cc9d058 WatchSource:0}: Error finding container a82aab6a609c1bf78e5dc0f8d90b19de0eb70fa8fe5725cbf01da73b9cc9d058: Status 404 returned error can't find the container with id a82aab6a609c1bf78e5dc0f8d90b19de0eb70fa8fe5725cbf01da73b9cc9d058 Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.594690 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd2ct"] Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.621002 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-b7df6647d-f9nk7" Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.819312 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-b7df6647d-f9nk7"] Feb 19 12:58:53 crc kubenswrapper[4833]: W0219 12:58:53.824792 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11eedc81_7281_4a5e_9d16_c58d6741e075.slice/crio-ca25f00f822b281faea7651265ab5b318163377f9fa3956742a6e38b334cefb0 WatchSource:0}: Error finding container ca25f00f822b281faea7651265ab5b318163377f9fa3956742a6e38b334cefb0: Status 404 returned error can't find the container with id ca25f00f822b281faea7651265ab5b318163377f9fa3956742a6e38b334cefb0 Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.878239 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4szmh" event={"ID":"ae4fac4c-baa2-4e07-aa2a-e1fa2f28aeed","Type":"ContainerStarted","Data":"10962155edaf7fb62825d0c19229dfbb54db748799b4dacf7b5f6542f429756a"} Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.879872 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd2ct" event={"ID":"61daa4d0-c750-45c0-83b1-99ec44ba8842","Type":"ContainerStarted","Data":"ecac3ff500b524654cfd4b6dc44a801bb46ce4dd42adc711a6cb7468d8f36e04"} Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.881199 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-x2ld2" event={"ID":"336bca49-9b02-4814-a710-f133cc1d3e46","Type":"ContainerStarted","Data":"a82aab6a609c1bf78e5dc0f8d90b19de0eb70fa8fe5725cbf01da73b9cc9d058"} Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.882367 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-b7df6647d-f9nk7" event={"ID":"11eedc81-7281-4a5e-9d16-c58d6741e075","Type":"ContainerStarted","Data":"ca25f00f822b281faea7651265ab5b318163377f9fa3956742a6e38b334cefb0"} Feb 19 12:58:53 crc kubenswrapper[4833]: I0219 12:58:53.883615 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-pb67x" event={"ID":"d5e6d19d-fb8f-4313-bd78-d5f82fa79e40","Type":"ContainerStarted","Data":"cf6162b55168faf981bb35e23883d6a7dfe4234c91ae014eb8f5eb5963a08ecb"} Feb 19 12:58:54 crc kubenswrapper[4833]: I0219 12:58:54.892530 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-b7df6647d-f9nk7" event={"ID":"11eedc81-7281-4a5e-9d16-c58d6741e075","Type":"ContainerStarted","Data":"66a34088cc67eaf50c7ec6225072ce6d777e3ea420c59e6d4caf8c27493d8809"} Feb 19 12:58:54 crc kubenswrapper[4833]: I0219 12:58:54.909244 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-b7df6647d-f9nk7" podStartSLOduration=1.909224226 podStartE2EDuration="1.909224226s" podCreationTimestamp="2026-02-19 12:58:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 12:58:54.907989113 +0000 UTC m=+745.303507881" watchObservedRunningTime="2026-02-19 12:58:54.909224226 +0000 UTC m=+745.304742994" Feb 19 12:58:57 crc kubenswrapper[4833]: I0219 12:58:57.914645 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-x2ld2" 
event={"ID":"336bca49-9b02-4814-a710-f133cc1d3e46","Type":"ContainerStarted","Data":"5da247d7ba7be49ebc6d4300c57a6f71526cc7aee17163a75044e1a437f219c3"} Feb 19 12:58:57 crc kubenswrapper[4833]: I0219 12:58:57.915700 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-x2ld2" Feb 19 12:58:57 crc kubenswrapper[4833]: I0219 12:58:57.916739 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd2ct" event={"ID":"61daa4d0-c750-45c0-83b1-99ec44ba8842","Type":"ContainerStarted","Data":"cfdab15ffae1743d38120c168c90460e4fce9377be1ab2fbb49ce128e955708a"} Feb 19 12:58:57 crc kubenswrapper[4833]: I0219 12:58:57.918045 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-pb67x" event={"ID":"d5e6d19d-fb8f-4313-bd78-d5f82fa79e40","Type":"ContainerStarted","Data":"24dc7d391827bd7869e3527edf13eb0a087647474d22783871e6347336908ac0"} Feb 19 12:58:57 crc kubenswrapper[4833]: I0219 12:58:57.918198 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-pb67x" Feb 19 12:58:57 crc kubenswrapper[4833]: I0219 12:58:57.922691 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4szmh" event={"ID":"ae4fac4c-baa2-4e07-aa2a-e1fa2f28aeed","Type":"ContainerStarted","Data":"5f3ff32713bd2c21707612f6bc2d4013a8be94d8ca6cb1b925d0262886593219"} Feb 19 12:58:57 crc kubenswrapper[4833]: I0219 12:58:57.951212 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-dd2ct" podStartSLOduration=1.574963594 podStartE2EDuration="4.951195439s" podCreationTimestamp="2026-02-19 12:58:53 +0000 UTC" firstStartedPulling="2026-02-19 12:58:53.604638218 +0000 UTC m=+744.000156986" lastFinishedPulling="2026-02-19 12:58:56.980870033 +0000 UTC m=+747.376388831" observedRunningTime="2026-02-19 12:58:57.949719359 +0000 UTC m=+748.345238127" watchObservedRunningTime="2026-02-19 12:58:57.951195439 +0000 UTC m=+748.346714217" Feb 19 12:58:57 crc kubenswrapper[4833]: I0219 12:58:57.952085 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-x2ld2" podStartSLOduration=2.50363509 podStartE2EDuration="5.952076913s" podCreationTimestamp="2026-02-19 12:58:52 +0000 UTC" firstStartedPulling="2026-02-19 12:58:53.530635782 +0000 UTC m=+743.926154560" lastFinishedPulling="2026-02-19 12:58:56.979077575 +0000 UTC m=+747.374596383" observedRunningTime="2026-02-19 12:58:57.934900162 +0000 UTC m=+748.330418940" watchObservedRunningTime="2026-02-19 12:58:57.952076913 +0000 UTC m=+748.347595681" Feb 19 12:58:57 crc kubenswrapper[4833]: I0219 12:58:57.965827 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-pb67x" podStartSLOduration=2.28311768 podStartE2EDuration="5.965805471s" podCreationTimestamp="2026-02-19 12:58:52 +0000 UTC" firstStartedPulling="2026-02-19 12:58:53.334779754 +0000 UTC m=+743.730298532" lastFinishedPulling="2026-02-19 12:58:57.017467525 +0000 UTC m=+747.412986323" observedRunningTime="2026-02-19 12:58:57.963936711 +0000 UTC m=+748.359455489" watchObservedRunningTime="2026-02-19 12:58:57.965805471 +0000 UTC m=+748.361324239" Feb 19 12:58:59 crc kubenswrapper[4833]: I0219 12:58:59.941030 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-metrics-58c85c668d-4szmh" event={"ID":"ae4fac4c-baa2-4e07-aa2a-e1fa2f28aeed","Type":"ContainerStarted","Data":"94fb824748aeefb639fbe9e1f07e55737c447fb4d16eb60073061fa71a6e6f35"} Feb 19 12:58:59 crc kubenswrapper[4833]: I0219 12:58:59.966929 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-4szmh" podStartSLOduration=2.045366387 podStartE2EDuration="7.966901024s" podCreationTimestamp="2026-02-19 12:58:52 +0000 UTC" firstStartedPulling="2026-02-19 12:58:53.489840336 +0000 UTC m=+743.885359104" lastFinishedPulling="2026-02-19 12:58:59.411374973 +0000 UTC m=+749.806893741" observedRunningTime="2026-02-19 12:58:59.963627897 +0000 UTC m=+750.359146735" watchObservedRunningTime="2026-02-19 12:58:59.966901024 +0000 UTC m=+750.362419822" Feb 19 12:59:03 crc kubenswrapper[4833]: I0219 12:59:03.344848 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-pb67x" Feb 19 12:59:03 crc kubenswrapper[4833]: I0219 12:59:03.621176 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-b7df6647d-f9nk7" Feb 19 12:59:03 crc kubenswrapper[4833]: I0219 12:59:03.621622 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-b7df6647d-f9nk7" Feb 19 12:59:03 crc kubenswrapper[4833]: I0219 12:59:03.627631 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-b7df6647d-f9nk7" Feb 19 12:59:03 crc kubenswrapper[4833]: I0219 12:59:03.900038 4833 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 12:59:03 crc kubenswrapper[4833]: I0219 12:59:03.978993 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-b7df6647d-f9nk7" Feb 19 12:59:04 crc kubenswrapper[4833]: I0219 12:59:04.046277 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-zjv88"] Feb 19 12:59:13 crc kubenswrapper[4833]: I0219 12:59:13.285471 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-x2ld2" Feb 19 12:59:15 crc kubenswrapper[4833]: I0219 12:59:15.744979 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 12:59:15 crc kubenswrapper[4833]: I0219 12:59:15.745677 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 12:59:27 crc kubenswrapper[4833]: I0219 12:59:27.675017 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx"] Feb 19 12:59:27 crc kubenswrapper[4833]: I0219 12:59:27.680664 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx" Feb 19 12:59:27 crc kubenswrapper[4833]: I0219 12:59:27.681248 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx"] Feb 19 12:59:27 crc kubenswrapper[4833]: I0219 12:59:27.683665 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 19 12:59:27 crc kubenswrapper[4833]: I0219 12:59:27.863085 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/29dbbeed-6768-4f91-a9ab-ad93f33f9896-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx\" (UID: \"29dbbeed-6768-4f91-a9ab-ad93f33f9896\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx" Feb 19 12:59:27 crc kubenswrapper[4833]: I0219 12:59:27.863213 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/29dbbeed-6768-4f91-a9ab-ad93f33f9896-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx\" (UID: \"29dbbeed-6768-4f91-a9ab-ad93f33f9896\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx" Feb 19 12:59:27 crc kubenswrapper[4833]: I0219 12:59:27.863308 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tgqs\" (UniqueName: \"kubernetes.io/projected/29dbbeed-6768-4f91-a9ab-ad93f33f9896-kube-api-access-6tgqs\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx\" (UID: \"29dbbeed-6768-4f91-a9ab-ad93f33f9896\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx" Feb 19 12:59:27 crc kubenswrapper[4833]: I0219 12:59:27.964972 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/29dbbeed-6768-4f91-a9ab-ad93f33f9896-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx\" (UID: \"29dbbeed-6768-4f91-a9ab-ad93f33f9896\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx" Feb 19 12:59:27 crc kubenswrapper[4833]: I0219 12:59:27.965099 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tgqs\" (UniqueName: \"kubernetes.io/projected/29dbbeed-6768-4f91-a9ab-ad93f33f9896-kube-api-access-6tgqs\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx\" (UID: \"29dbbeed-6768-4f91-a9ab-ad93f33f9896\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx" Feb 19 12:59:27 crc kubenswrapper[4833]: I0219 12:59:27.965163 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/29dbbeed-6768-4f91-a9ab-ad93f33f9896-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx\" (UID: \"29dbbeed-6768-4f91-a9ab-ad93f33f9896\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx" Feb 19 12:59:27 crc kubenswrapper[4833]: I0219 12:59:27.965779 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/29dbbeed-6768-4f91-a9ab-ad93f33f9896-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx\" (UID: \"29dbbeed-6768-4f91-a9ab-ad93f33f9896\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx" Feb 19 12:59:27 crc kubenswrapper[4833]: I0219 12:59:27.965797 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/29dbbeed-6768-4f91-a9ab-ad93f33f9896-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx\" (UID: \"29dbbeed-6768-4f91-a9ab-ad93f33f9896\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx" Feb 19 12:59:27 crc kubenswrapper[4833]: I0219 12:59:27.990329 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tgqs\" (UniqueName: \"kubernetes.io/projected/29dbbeed-6768-4f91-a9ab-ad93f33f9896-kube-api-access-6tgqs\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx\" (UID: \"29dbbeed-6768-4f91-a9ab-ad93f33f9896\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx" Feb 19 12:59:28 crc kubenswrapper[4833]: I0219 12:59:28.008539 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx" Feb 19 12:59:28 crc kubenswrapper[4833]: I0219 12:59:28.295772 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx"] Feb 19 12:59:29 crc kubenswrapper[4833]: I0219 12:59:29.095958 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-zjv88" podUID="8dd5929b-7ec0-43c7-beb6-e3a3afbeec63" containerName="console" containerID="cri-o://c2a00d340279bef1b14315339bf392b961d165f465909ddcf721f6c95a5b76a4" gracePeriod=15 Feb 19 12:59:29 crc kubenswrapper[4833]: I0219 12:59:29.157057 4833 generic.go:334] "Generic (PLEG): container finished" podID="29dbbeed-6768-4f91-a9ab-ad93f33f9896" containerID="4587a4cc21e483ea3b1ab8290268d6c77afda1e58c45c2f1ba7f34b3ed9302e5" exitCode=0 Feb 19 12:59:29 crc kubenswrapper[4833]: I0219 12:59:29.157319 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx" event={"ID":"29dbbeed-6768-4f91-a9ab-ad93f33f9896","Type":"ContainerDied","Data":"4587a4cc21e483ea3b1ab8290268d6c77afda1e58c45c2f1ba7f34b3ed9302e5"} Feb 19 12:59:29 crc kubenswrapper[4833]: I0219 12:59:29.157475 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx" event={"ID":"29dbbeed-6768-4f91-a9ab-ad93f33f9896","Type":"ContainerStarted","Data":"8685982e00b3bfcdd7fdffdad5dc063341e43fd0c330fdce57446304d0ce6711"} Feb 19 12:59:29 crc kubenswrapper[4833]: I0219 12:59:29.510953 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-zjv88_8dd5929b-7ec0-43c7-beb6-e3a3afbeec63/console/0.log" Feb 19 12:59:29 crc kubenswrapper[4833]: I0219 12:59:29.511013 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-zjv88" Feb 19 12:59:29 crc kubenswrapper[4833]: I0219 12:59:29.602263 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz727\" (UniqueName: \"kubernetes.io/projected/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-kube-api-access-fz727\") pod \"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63\" (UID: \"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63\") " Feb 19 12:59:29 crc kubenswrapper[4833]: I0219 12:59:29.602348 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-console-serving-cert\") pod \"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63\" (UID: \"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63\") " Feb 19 12:59:29 crc kubenswrapper[4833]: I0219 12:59:29.602386 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-console-config\") pod \"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63\" (UID: \"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63\") " Feb 19 12:59:29 crc kubenswrapper[4833]: I0219 12:59:29.602461 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-trusted-ca-bundle\") pod \"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63\" (UID: \"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63\") " Feb 19 12:59:29 crc kubenswrapper[4833]: I0219 12:59:29.602545 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-service-ca\") pod \"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63\" (UID: \"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63\") " Feb 19 12:59:29 crc kubenswrapper[4833]: I0219 12:59:29.602570 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-console-oauth-config\") pod \"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63\" (UID: \"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63\") " Feb 19 12:59:29 crc kubenswrapper[4833]: I0219 12:59:29.602619 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-oauth-serving-cert\") pod \"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63\" (UID: \"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63\") " Feb 19 12:59:29 crc kubenswrapper[4833]: I0219 12:59:29.603309 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-console-config" (OuterVolumeSpecName: "console-config") pod "8dd5929b-7ec0-43c7-beb6-e3a3afbeec63" (UID: "8dd5929b-7ec0-43c7-beb6-e3a3afbeec63"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:59:29 crc kubenswrapper[4833]: I0219 12:59:29.603460 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "8dd5929b-7ec0-43c7-beb6-e3a3afbeec63" (UID: "8dd5929b-7ec0-43c7-beb6-e3a3afbeec63"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:59:29 crc kubenswrapper[4833]: I0219 12:59:29.603456 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "8dd5929b-7ec0-43c7-beb6-e3a3afbeec63" (UID: "8dd5929b-7ec0-43c7-beb6-e3a3afbeec63"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:59:29 crc kubenswrapper[4833]: I0219 12:59:29.603777 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-service-ca" (OuterVolumeSpecName: "service-ca") pod "8dd5929b-7ec0-43c7-beb6-e3a3afbeec63" (UID: "8dd5929b-7ec0-43c7-beb6-e3a3afbeec63"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 12:59:29 crc kubenswrapper[4833]: I0219 12:59:29.610832 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-kube-api-access-fz727" (OuterVolumeSpecName: "kube-api-access-fz727") pod "8dd5929b-7ec0-43c7-beb6-e3a3afbeec63" (UID: "8dd5929b-7ec0-43c7-beb6-e3a3afbeec63"). InnerVolumeSpecName "kube-api-access-fz727". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:59:29 crc kubenswrapper[4833]: I0219 12:59:29.614823 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "8dd5929b-7ec0-43c7-beb6-e3a3afbeec63" (UID: "8dd5929b-7ec0-43c7-beb6-e3a3afbeec63"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:59:29 crc kubenswrapper[4833]: I0219 12:59:29.615136 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "8dd5929b-7ec0-43c7-beb6-e3a3afbeec63" (UID: "8dd5929b-7ec0-43c7-beb6-e3a3afbeec63"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 12:59:29 crc kubenswrapper[4833]: I0219 12:59:29.705431 4833 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 12:59:29 crc kubenswrapper[4833]: I0219 12:59:29.705467 4833 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:59:29 crc kubenswrapper[4833]: I0219 12:59:29.705478 4833 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:59:29 crc kubenswrapper[4833]: I0219 12:59:29.705487 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz727\" (UniqueName: \"kubernetes.io/projected/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-kube-api-access-fz727\") on node \"crc\" DevicePath \"\"" Feb 19 12:59:29 crc kubenswrapper[4833]: I0219 12:59:29.705512 4833 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 12:59:29 crc kubenswrapper[4833]: I0219 12:59:29.705519 4833 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-console-config\") on node \"crc\" DevicePath \"\"" Feb 19 12:59:29 crc kubenswrapper[4833]: I0219 12:59:29.705527 4833 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 12:59:30 crc kubenswrapper[4833]: I0219 12:59:30.038566 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m92x6"] Feb 19 12:59:30 crc kubenswrapper[4833]: E0219 12:59:30.044319 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dd5929b-7ec0-43c7-beb6-e3a3afbeec63" containerName="console" Feb 19 12:59:30 crc kubenswrapper[4833]: I0219 12:59:30.044371 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dd5929b-7ec0-43c7-beb6-e3a3afbeec63" containerName="console" Feb 19 12:59:30 crc kubenswrapper[4833]: I0219 12:59:30.046121 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dd5929b-7ec0-43c7-beb6-e3a3afbeec63" containerName="console" Feb 19 12:59:30 crc kubenswrapper[4833]: I0219 12:59:30.049468 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m92x6" Feb 19 12:59:30 crc kubenswrapper[4833]: I0219 12:59:30.049848 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m92x6"] Feb 19 12:59:30 crc kubenswrapper[4833]: I0219 12:59:30.112827 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a15828c8-da5a-4bb0-9999-890b2908d0d1-utilities\") pod \"redhat-operators-m92x6\" (UID: \"a15828c8-da5a-4bb0-9999-890b2908d0d1\") " pod="openshift-marketplace/redhat-operators-m92x6" Feb 19 12:59:30 crc kubenswrapper[4833]: I0219 12:59:30.112994 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a15828c8-da5a-4bb0-9999-890b2908d0d1-catalog-content\") pod \"redhat-operators-m92x6\" (UID: \"a15828c8-da5a-4bb0-9999-890b2908d0d1\") " pod="openshift-marketplace/redhat-operators-m92x6" Feb 19 12:59:30 crc kubenswrapper[4833]: I0219 12:59:30.113044 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsrl4\" (UniqueName: \"kubernetes.io/projected/a15828c8-da5a-4bb0-9999-890b2908d0d1-kube-api-access-qsrl4\") pod \"redhat-operators-m92x6\" (UID: \"a15828c8-da5a-4bb0-9999-890b2908d0d1\") " pod="openshift-marketplace/redhat-operators-m92x6" Feb 19 12:59:30 crc kubenswrapper[4833]: I0219 12:59:30.164435 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-zjv88_8dd5929b-7ec0-43c7-beb6-e3a3afbeec63/console/0.log" Feb 19 12:59:30 crc kubenswrapper[4833]: I0219 12:59:30.164530 4833 generic.go:334] "Generic (PLEG): container finished" podID="8dd5929b-7ec0-43c7-beb6-e3a3afbeec63" containerID="c2a00d340279bef1b14315339bf392b961d165f465909ddcf721f6c95a5b76a4" exitCode=2 Feb 19 12:59:30 crc kubenswrapper[4833]: I0219 12:59:30.164573 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zjv88" event={"ID":"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63","Type":"ContainerDied","Data":"c2a00d340279bef1b14315339bf392b961d165f465909ddcf721f6c95a5b76a4"} Feb 19 12:59:30 crc kubenswrapper[4833]: I0219 12:59:30.164583 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-zjv88" Feb 19 12:59:30 crc kubenswrapper[4833]: I0219 12:59:30.164602 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zjv88" event={"ID":"8dd5929b-7ec0-43c7-beb6-e3a3afbeec63","Type":"ContainerDied","Data":"f3703f41ba8ea714c509bc765ff5ebde26279bad586041f16c0abda6e48d1678"} Feb 19 12:59:30 crc kubenswrapper[4833]: I0219 12:59:30.164621 4833 scope.go:117] "RemoveContainer" containerID="c2a00d340279bef1b14315339bf392b961d165f465909ddcf721f6c95a5b76a4" Feb 19 12:59:30 crc kubenswrapper[4833]: I0219 12:59:30.190714 4833 scope.go:117] "RemoveContainer" containerID="c2a00d340279bef1b14315339bf392b961d165f465909ddcf721f6c95a5b76a4" Feb 19 12:59:30 crc kubenswrapper[4833]: E0219 12:59:30.191785 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2a00d340279bef1b14315339bf392b961d165f465909ddcf721f6c95a5b76a4\": container with ID starting with c2a00d340279bef1b14315339bf392b961d165f465909ddcf721f6c95a5b76a4 not found: ID does not exist" containerID="c2a00d340279bef1b14315339bf392b961d165f465909ddcf721f6c95a5b76a4" Feb 19 12:59:30 crc kubenswrapper[4833]: I0219 12:59:30.191825 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2a00d340279bef1b14315339bf392b961d165f465909ddcf721f6c95a5b76a4"} err="failed to get container status \"c2a00d340279bef1b14315339bf392b961d165f465909ddcf721f6c95a5b76a4\": rpc error: code = NotFound desc = could not find container \"c2a00d340279bef1b14315339bf392b961d165f465909ddcf721f6c95a5b76a4\": container with ID starting with c2a00d340279bef1b14315339bf392b961d165f465909ddcf721f6c95a5b76a4 not found: ID does not exist" Feb 19 12:59:30 crc kubenswrapper[4833]: I0219 12:59:30.204306 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-zjv88"] Feb 19 12:59:30 crc kubenswrapper[4833]: I0219 12:59:30.209437 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-zjv88"] Feb 19 12:59:30 crc kubenswrapper[4833]: I0219 12:59:30.214648 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a15828c8-da5a-4bb0-9999-890b2908d0d1-catalog-content\") pod \"redhat-operators-m92x6\" (UID: \"a15828c8-da5a-4bb0-9999-890b2908d0d1\") " pod="openshift-marketplace/redhat-operators-m92x6" Feb 19 12:59:30 crc kubenswrapper[4833]: I0219 12:59:30.214821 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsrl4\" (UniqueName: \"kubernetes.io/projected/a15828c8-da5a-4bb0-9999-890b2908d0d1-kube-api-access-qsrl4\") pod \"redhat-operators-m92x6\" (UID: \"a15828c8-da5a-4bb0-9999-890b2908d0d1\") " pod="openshift-marketplace/redhat-operators-m92x6" Feb 19 12:59:30 crc kubenswrapper[4833]: I0219 12:59:30.214920 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a15828c8-da5a-4bb0-9999-890b2908d0d1-utilities\") pod \"redhat-operators-m92x6\" (UID: \"a15828c8-da5a-4bb0-9999-890b2908d0d1\") " pod="openshift-marketplace/redhat-operators-m92x6" Feb 19 12:59:30 crc kubenswrapper[4833]: I0219 12:59:30.215476 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a15828c8-da5a-4bb0-9999-890b2908d0d1-utilities\") pod 
\"redhat-operators-m92x6\" (UID: \"a15828c8-da5a-4bb0-9999-890b2908d0d1\") " pod="openshift-marketplace/redhat-operators-m92x6" Feb 19 12:59:30 crc kubenswrapper[4833]: I0219 12:59:30.215642 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a15828c8-da5a-4bb0-9999-890b2908d0d1-catalog-content\") pod \"redhat-operators-m92x6\" (UID: \"a15828c8-da5a-4bb0-9999-890b2908d0d1\") " pod="openshift-marketplace/redhat-operators-m92x6" Feb 19 12:59:30 crc kubenswrapper[4833]: I0219 12:59:30.237897 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsrl4\" (UniqueName: \"kubernetes.io/projected/a15828c8-da5a-4bb0-9999-890b2908d0d1-kube-api-access-qsrl4\") pod \"redhat-operators-m92x6\" (UID: \"a15828c8-da5a-4bb0-9999-890b2908d0d1\") " pod="openshift-marketplace/redhat-operators-m92x6" Feb 19 12:59:30 crc kubenswrapper[4833]: I0219 12:59:30.322685 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dd5929b-7ec0-43c7-beb6-e3a3afbeec63" path="/var/lib/kubelet/pods/8dd5929b-7ec0-43c7-beb6-e3a3afbeec63/volumes" Feb 19 12:59:30 crc kubenswrapper[4833]: I0219 12:59:30.370880 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m92x6" Feb 19 12:59:30 crc kubenswrapper[4833]: I0219 12:59:30.896734 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m92x6"] Feb 19 12:59:30 crc kubenswrapper[4833]: W0219 12:59:30.916756 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda15828c8_da5a_4bb0_9999_890b2908d0d1.slice/crio-126ab22104531b2db7ccb018744ccdfb1f934c30a47e14e8941dd99525ec8cf8 WatchSource:0}: Error finding container 126ab22104531b2db7ccb018744ccdfb1f934c30a47e14e8941dd99525ec8cf8: Status 404 returned error can't find the container with id 126ab22104531b2db7ccb018744ccdfb1f934c30a47e14e8941dd99525ec8cf8 Feb 19 12:59:31 crc kubenswrapper[4833]: I0219 12:59:31.172406 4833 generic.go:334] "Generic (PLEG): container finished" podID="29dbbeed-6768-4f91-a9ab-ad93f33f9896" containerID="e1a11bdf8882f1133b60e6091ab45cd3c4a1d08d01bdfff5e28de5fefdcdc73f" exitCode=0 Feb 19 12:59:31 crc kubenswrapper[4833]: I0219 12:59:31.172535 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx" event={"ID":"29dbbeed-6768-4f91-a9ab-ad93f33f9896","Type":"ContainerDied","Data":"e1a11bdf8882f1133b60e6091ab45cd3c4a1d08d01bdfff5e28de5fefdcdc73f"} Feb 19 12:59:31 crc kubenswrapper[4833]: I0219 12:59:31.174220 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m92x6" event={"ID":"a15828c8-da5a-4bb0-9999-890b2908d0d1","Type":"ContainerStarted","Data":"126ab22104531b2db7ccb018744ccdfb1f934c30a47e14e8941dd99525ec8cf8"} Feb 19 12:59:32 crc kubenswrapper[4833]: I0219 12:59:32.183669 4833 generic.go:334] "Generic (PLEG): container finished" podID="a15828c8-da5a-4bb0-9999-890b2908d0d1" containerID="f0eb4bed7afaca2daed471a62678e60224d8f26b33a76c7159423a8fdc5ad6c9" exitCode=0 Feb 19 12:59:32 crc kubenswrapper[4833]: I0219 12:59:32.183720 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m92x6" 
event={"ID":"a15828c8-da5a-4bb0-9999-890b2908d0d1","Type":"ContainerDied","Data":"f0eb4bed7afaca2daed471a62678e60224d8f26b33a76c7159423a8fdc5ad6c9"} Feb 19 12:59:32 crc kubenswrapper[4833]: I0219 12:59:32.186753 4833 generic.go:334] "Generic (PLEG): container finished" podID="29dbbeed-6768-4f91-a9ab-ad93f33f9896" containerID="3681f958e2b0196a2a0d2df3d6c77ecb55810003d4a85255b45c5a206a2621ca" exitCode=0 Feb 19 12:59:32 crc kubenswrapper[4833]: I0219 12:59:32.186786 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx" event={"ID":"29dbbeed-6768-4f91-a9ab-ad93f33f9896","Type":"ContainerDied","Data":"3681f958e2b0196a2a0d2df3d6c77ecb55810003d4a85255b45c5a206a2621ca"} Feb 19 12:59:33 crc kubenswrapper[4833]: I0219 12:59:33.197977 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m92x6" event={"ID":"a15828c8-da5a-4bb0-9999-890b2908d0d1","Type":"ContainerStarted","Data":"24574a8b959646bfaefe6fdbdfbc5e0fa8e9b5d2df645276e9375893acdf7e5f"} Feb 19 12:59:33 crc kubenswrapper[4833]: I0219 12:59:33.489599 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx" Feb 19 12:59:33 crc kubenswrapper[4833]: I0219 12:59:33.565463 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/29dbbeed-6768-4f91-a9ab-ad93f33f9896-util\") pod \"29dbbeed-6768-4f91-a9ab-ad93f33f9896\" (UID: \"29dbbeed-6768-4f91-a9ab-ad93f33f9896\") " Feb 19 12:59:33 crc kubenswrapper[4833]: I0219 12:59:33.565596 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/29dbbeed-6768-4f91-a9ab-ad93f33f9896-bundle\") pod \"29dbbeed-6768-4f91-a9ab-ad93f33f9896\" (UID: \"29dbbeed-6768-4f91-a9ab-ad93f33f9896\") " Feb 19 12:59:33 crc kubenswrapper[4833]: I0219 12:59:33.567472 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29dbbeed-6768-4f91-a9ab-ad93f33f9896-bundle" (OuterVolumeSpecName: "bundle") pod "29dbbeed-6768-4f91-a9ab-ad93f33f9896" (UID: "29dbbeed-6768-4f91-a9ab-ad93f33f9896"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 12:59:33 crc kubenswrapper[4833]: I0219 12:59:33.567627 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tgqs\" (UniqueName: \"kubernetes.io/projected/29dbbeed-6768-4f91-a9ab-ad93f33f9896-kube-api-access-6tgqs\") pod \"29dbbeed-6768-4f91-a9ab-ad93f33f9896\" (UID: \"29dbbeed-6768-4f91-a9ab-ad93f33f9896\") " Feb 19 12:59:33 crc kubenswrapper[4833]: I0219 12:59:33.569011 4833 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/29dbbeed-6768-4f91-a9ab-ad93f33f9896-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 12:59:33 crc kubenswrapper[4833]: I0219 12:59:33.573056 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29dbbeed-6768-4f91-a9ab-ad93f33f9896-kube-api-access-6tgqs" (OuterVolumeSpecName: "kube-api-access-6tgqs") pod "29dbbeed-6768-4f91-a9ab-ad93f33f9896" (UID: "29dbbeed-6768-4f91-a9ab-ad93f33f9896"). InnerVolumeSpecName "kube-api-access-6tgqs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:59:33 crc kubenswrapper[4833]: I0219 12:59:33.598241 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29dbbeed-6768-4f91-a9ab-ad93f33f9896-util" (OuterVolumeSpecName: "util") pod "29dbbeed-6768-4f91-a9ab-ad93f33f9896" (UID: "29dbbeed-6768-4f91-a9ab-ad93f33f9896"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 12:59:33 crc kubenswrapper[4833]: I0219 12:59:33.670302 4833 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/29dbbeed-6768-4f91-a9ab-ad93f33f9896-util\") on node \"crc\" DevicePath \"\"" Feb 19 12:59:33 crc kubenswrapper[4833]: I0219 12:59:33.670348 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tgqs\" (UniqueName: \"kubernetes.io/projected/29dbbeed-6768-4f91-a9ab-ad93f33f9896-kube-api-access-6tgqs\") on node \"crc\" DevicePath \"\"" Feb 19 12:59:34 crc kubenswrapper[4833]: I0219 12:59:34.209583 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx" Feb 19 12:59:34 crc kubenswrapper[4833]: I0219 12:59:34.209604 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx" event={"ID":"29dbbeed-6768-4f91-a9ab-ad93f33f9896","Type":"ContainerDied","Data":"8685982e00b3bfcdd7fdffdad5dc063341e43fd0c330fdce57446304d0ce6711"} Feb 19 12:59:34 crc kubenswrapper[4833]: I0219 12:59:34.210162 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8685982e00b3bfcdd7fdffdad5dc063341e43fd0c330fdce57446304d0ce6711" Feb 19 12:59:34 crc kubenswrapper[4833]: I0219 12:59:34.213003 4833 generic.go:334] "Generic (PLEG): container finished" podID="a15828c8-da5a-4bb0-9999-890b2908d0d1" containerID="24574a8b959646bfaefe6fdbdfbc5e0fa8e9b5d2df645276e9375893acdf7e5f" exitCode=0 Feb 19 12:59:34 crc kubenswrapper[4833]: I0219 12:59:34.213071 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m92x6" event={"ID":"a15828c8-da5a-4bb0-9999-890b2908d0d1","Type":"ContainerDied","Data":"24574a8b959646bfaefe6fdbdfbc5e0fa8e9b5d2df645276e9375893acdf7e5f"} Feb 19 12:59:36 crc kubenswrapper[4833]: I0219 12:59:36.231468 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m92x6" event={"ID":"a15828c8-da5a-4bb0-9999-890b2908d0d1","Type":"ContainerStarted","Data":"0455a7bb5eb4d4bc0393c5c20e479ab843bfb372f98d3d1efb44295266631b5f"} Feb 19 12:59:36 crc kubenswrapper[4833]: I0219 12:59:36.248519 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m92x6" podStartSLOduration=2.71725218 podStartE2EDuration="6.248488138s" podCreationTimestamp="2026-02-19 12:59:30 +0000 UTC" firstStartedPulling="2026-02-19 12:59:32.186960552 +0000 UTC m=+782.582479360" lastFinishedPulling="2026-02-19 12:59:35.71819652 +0000 UTC m=+786.113715318" observedRunningTime="2026-02-19 12:59:36.246930837 +0000 UTC m=+786.642449615" watchObservedRunningTime="2026-02-19 12:59:36.248488138 +0000 UTC m=+786.644006916" Feb 19 12:59:40 crc kubenswrapper[4833]: I0219 12:59:40.371305 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m92x6" Feb 19 12:59:40 crc 
kubenswrapper[4833]: I0219 12:59:40.372795 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m92x6" Feb 19 12:59:41 crc kubenswrapper[4833]: I0219 12:59:41.415609 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m92x6" podUID="a15828c8-da5a-4bb0-9999-890b2908d0d1" containerName="registry-server" probeResult="failure" output=< Feb 19 12:59:41 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s Feb 19 12:59:41 crc kubenswrapper[4833]: > Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.170710 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-595bc44cf4-flp9d"] Feb 19 12:59:43 crc kubenswrapper[4833]: E0219 12:59:43.171091 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29dbbeed-6768-4f91-a9ab-ad93f33f9896" containerName="pull" Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.171103 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="29dbbeed-6768-4f91-a9ab-ad93f33f9896" containerName="pull" Feb 19 12:59:43 crc kubenswrapper[4833]: E0219 12:59:43.171123 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29dbbeed-6768-4f91-a9ab-ad93f33f9896" containerName="extract" Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.171130 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="29dbbeed-6768-4f91-a9ab-ad93f33f9896" containerName="extract" Feb 19 12:59:43 crc kubenswrapper[4833]: E0219 12:59:43.171137 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29dbbeed-6768-4f91-a9ab-ad93f33f9896" containerName="util" Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.171142 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="29dbbeed-6768-4f91-a9ab-ad93f33f9896" containerName="util" Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.171313 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="29dbbeed-6768-4f91-a9ab-ad93f33f9896" containerName="extract" Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.172073 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-595bc44cf4-flp9d" Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.176882 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.176895 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-rqsvj" Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.177106 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.177202 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.177377 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.187629 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/84aafe4e-e69c-4cdb-8987-71eb568e3c6b-apiservice-cert\") pod \"metallb-operator-controller-manager-595bc44cf4-flp9d\" (UID: \"84aafe4e-e69c-4cdb-8987-71eb568e3c6b\") " pod="metallb-system/metallb-operator-controller-manager-595bc44cf4-flp9d" Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.187671 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jhbt\" (UniqueName: \"kubernetes.io/projected/84aafe4e-e69c-4cdb-8987-71eb568e3c6b-kube-api-access-4jhbt\") pod \"metallb-operator-controller-manager-595bc44cf4-flp9d\" (UID: \"84aafe4e-e69c-4cdb-8987-71eb568e3c6b\") " pod="metallb-system/metallb-operator-controller-manager-595bc44cf4-flp9d" Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.187708 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/84aafe4e-e69c-4cdb-8987-71eb568e3c6b-webhook-cert\") pod \"metallb-operator-controller-manager-595bc44cf4-flp9d\" (UID: \"84aafe4e-e69c-4cdb-8987-71eb568e3c6b\") " pod="metallb-system/metallb-operator-controller-manager-595bc44cf4-flp9d" Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.199620 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-595bc44cf4-flp9d"] Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.288432 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/84aafe4e-e69c-4cdb-8987-71eb568e3c6b-apiservice-cert\") pod \"metallb-operator-controller-manager-595bc44cf4-flp9d\" (UID: \"84aafe4e-e69c-4cdb-8987-71eb568e3c6b\") " pod="metallb-system/metallb-operator-controller-manager-595bc44cf4-flp9d" Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.288474 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jhbt\" (UniqueName: \"kubernetes.io/projected/84aafe4e-e69c-4cdb-8987-71eb568e3c6b-kube-api-access-4jhbt\") pod \"metallb-operator-controller-manager-595bc44cf4-flp9d\" (UID: \"84aafe4e-e69c-4cdb-8987-71eb568e3c6b\") " pod="metallb-system/metallb-operator-controller-manager-595bc44cf4-flp9d" Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.288594 
4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/84aafe4e-e69c-4cdb-8987-71eb568e3c6b-webhook-cert\") pod \"metallb-operator-controller-manager-595bc44cf4-flp9d\" (UID: \"84aafe4e-e69c-4cdb-8987-71eb568e3c6b\") " pod="metallb-system/metallb-operator-controller-manager-595bc44cf4-flp9d" Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.294063 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/84aafe4e-e69c-4cdb-8987-71eb568e3c6b-webhook-cert\") pod \"metallb-operator-controller-manager-595bc44cf4-flp9d\" (UID: \"84aafe4e-e69c-4cdb-8987-71eb568e3c6b\") " pod="metallb-system/metallb-operator-controller-manager-595bc44cf4-flp9d" Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.297041 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/84aafe4e-e69c-4cdb-8987-71eb568e3c6b-apiservice-cert\") pod \"metallb-operator-controller-manager-595bc44cf4-flp9d\" (UID: \"84aafe4e-e69c-4cdb-8987-71eb568e3c6b\") " pod="metallb-system/metallb-operator-controller-manager-595bc44cf4-flp9d" Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.309036 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jhbt\" (UniqueName: \"kubernetes.io/projected/84aafe4e-e69c-4cdb-8987-71eb568e3c6b-kube-api-access-4jhbt\") pod \"metallb-operator-controller-manager-595bc44cf4-flp9d\" (UID: \"84aafe4e-e69c-4cdb-8987-71eb568e3c6b\") " pod="metallb-system/metallb-operator-controller-manager-595bc44cf4-flp9d" Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.425430 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-55ff9b8c6-prv8k"] Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.426102 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-55ff9b8c6-prv8k" Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.428249 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.428277 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.428702 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-xpcm4" Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.486334 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-55ff9b8c6-prv8k"] Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.490862 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/89d5e852-a20e-4eb4-a37a-6ecdbaf05484-apiservice-cert\") pod \"metallb-operator-webhook-server-55ff9b8c6-prv8k\" (UID: \"89d5e852-a20e-4eb4-a37a-6ecdbaf05484\") " pod="metallb-system/metallb-operator-webhook-server-55ff9b8c6-prv8k" Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.490920 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p24hn\" (UniqueName: \"kubernetes.io/projected/89d5e852-a20e-4eb4-a37a-6ecdbaf05484-kube-api-access-p24hn\") pod \"metallb-operator-webhook-server-55ff9b8c6-prv8k\" (UID: \"89d5e852-a20e-4eb4-a37a-6ecdbaf05484\") " pod="metallb-system/metallb-operator-webhook-server-55ff9b8c6-prv8k" Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.491053 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/89d5e852-a20e-4eb4-a37a-6ecdbaf05484-webhook-cert\") pod \"metallb-operator-webhook-server-55ff9b8c6-prv8k\" (UID: \"89d5e852-a20e-4eb4-a37a-6ecdbaf05484\") " pod="metallb-system/metallb-operator-webhook-server-55ff9b8c6-prv8k" Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.506420 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-595bc44cf4-flp9d" Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.594758 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p24hn\" (UniqueName: \"kubernetes.io/projected/89d5e852-a20e-4eb4-a37a-6ecdbaf05484-kube-api-access-p24hn\") pod \"metallb-operator-webhook-server-55ff9b8c6-prv8k\" (UID: \"89d5e852-a20e-4eb4-a37a-6ecdbaf05484\") " pod="metallb-system/metallb-operator-webhook-server-55ff9b8c6-prv8k" Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.594848 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/89d5e852-a20e-4eb4-a37a-6ecdbaf05484-webhook-cert\") pod \"metallb-operator-webhook-server-55ff9b8c6-prv8k\" (UID: \"89d5e852-a20e-4eb4-a37a-6ecdbaf05484\") " pod="metallb-system/metallb-operator-webhook-server-55ff9b8c6-prv8k" Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.594891 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/89d5e852-a20e-4eb4-a37a-6ecdbaf05484-apiservice-cert\") pod \"metallb-operator-webhook-server-55ff9b8c6-prv8k\" (UID: \"89d5e852-a20e-4eb4-a37a-6ecdbaf05484\") " pod="metallb-system/metallb-operator-webhook-server-55ff9b8c6-prv8k" Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.600776 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/89d5e852-a20e-4eb4-a37a-6ecdbaf05484-apiservice-cert\") pod \"metallb-operator-webhook-server-55ff9b8c6-prv8k\" (UID: \"89d5e852-a20e-4eb4-a37a-6ecdbaf05484\") " pod="metallb-system/metallb-operator-webhook-server-55ff9b8c6-prv8k" Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.600859 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/89d5e852-a20e-4eb4-a37a-6ecdbaf05484-webhook-cert\") pod \"metallb-operator-webhook-server-55ff9b8c6-prv8k\" (UID: \"89d5e852-a20e-4eb4-a37a-6ecdbaf05484\") " pod="metallb-system/metallb-operator-webhook-server-55ff9b8c6-prv8k" Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.618996 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p24hn\" (UniqueName: \"kubernetes.io/projected/89d5e852-a20e-4eb4-a37a-6ecdbaf05484-kube-api-access-p24hn\") pod \"metallb-operator-webhook-server-55ff9b8c6-prv8k\" (UID: \"89d5e852-a20e-4eb4-a37a-6ecdbaf05484\") " pod="metallb-system/metallb-operator-webhook-server-55ff9b8c6-prv8k" Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.740207 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-55ff9b8c6-prv8k" Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.756869 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-595bc44cf4-flp9d"] Feb 19 12:59:43 crc kubenswrapper[4833]: W0219 12:59:43.764726 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84aafe4e_e69c_4cdb_8987_71eb568e3c6b.slice/crio-fa2a0497e9596fe2bc651b467c744d4e7fe0435e80508f7cf010891419bf09ae WatchSource:0}: Error finding container fa2a0497e9596fe2bc651b467c744d4e7fe0435e80508f7cf010891419bf09ae: Status 404 returned error can't find the container with id fa2a0497e9596fe2bc651b467c744d4e7fe0435e80508f7cf010891419bf09ae Feb 19 12:59:43 crc kubenswrapper[4833]: I0219 12:59:43.934913 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-55ff9b8c6-prv8k"] Feb 19 12:59:43 crc kubenswrapper[4833]: W0219 12:59:43.939914 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89d5e852_a20e_4eb4_a37a_6ecdbaf05484.slice/crio-bedae76f306486b52328b3634278adca196649499616e97bc1ef2f404cebbeed WatchSource:0}: Error finding container bedae76f306486b52328b3634278adca196649499616e97bc1ef2f404cebbeed: Status 404 returned error can't find the container with id bedae76f306486b52328b3634278adca196649499616e97bc1ef2f404cebbeed Feb 19 12:59:44 crc kubenswrapper[4833]: I0219 12:59:44.277594 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-55ff9b8c6-prv8k" event={"ID":"89d5e852-a20e-4eb4-a37a-6ecdbaf05484","Type":"ContainerStarted","Data":"bedae76f306486b52328b3634278adca196649499616e97bc1ef2f404cebbeed"} Feb 19 12:59:44 crc kubenswrapper[4833]: I0219 12:59:44.278739 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-595bc44cf4-flp9d" event={"ID":"84aafe4e-e69c-4cdb-8987-71eb568e3c6b","Type":"ContainerStarted","Data":"fa2a0497e9596fe2bc651b467c744d4e7fe0435e80508f7cf010891419bf09ae"} Feb 19 12:59:45 crc kubenswrapper[4833]: I0219 12:59:45.744476 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 12:59:45 crc kubenswrapper[4833]: I0219 12:59:45.744574 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 12:59:45 crc kubenswrapper[4833]: I0219 12:59:45.744623 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" Feb 19 12:59:45 crc kubenswrapper[4833]: I0219 12:59:45.745269 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cd9eac9e9427e5822654e34b25e68666ba752339a3fe6cb1abe9c3e947b8e9ba"} pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 12:59:45 crc kubenswrapper[4833]: I0219 12:59:45.745335 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" containerID="cri-o://cd9eac9e9427e5822654e34b25e68666ba752339a3fe6cb1abe9c3e947b8e9ba" gracePeriod=600 Feb 19 12:59:46 crc kubenswrapper[4833]: I0219 12:59:46.295063 4833 generic.go:334] "Generic (PLEG): container finished" podID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerID="cd9eac9e9427e5822654e34b25e68666ba752339a3fe6cb1abe9c3e947b8e9ba" exitCode=0 Feb 19 12:59:46 crc kubenswrapper[4833]: I0219 12:59:46.295163 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" event={"ID":"a396d626-cea2-42cf-84c5-943b0b85a92b","Type":"ContainerDied","Data":"cd9eac9e9427e5822654e34b25e68666ba752339a3fe6cb1abe9c3e947b8e9ba"} Feb 19 12:59:46 crc kubenswrapper[4833]: I0219 12:59:46.295415 4833 scope.go:117] "RemoveContainer" containerID="bd3bb06bbf28e200008c01033a1abc693e0fd5b8b730530d913d8198d32d5301" Feb 19 12:59:47 crc kubenswrapper[4833]: I0219 12:59:47.321070 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" event={"ID":"a396d626-cea2-42cf-84c5-943b0b85a92b","Type":"ContainerStarted","Data":"44979c86cbc1a1a08268bf3eace13600a4809b3fa1a8321a545736d1f5619e6f"} Feb 19 12:59:50 crc kubenswrapper[4833]: I0219 12:59:50.337755 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-55ff9b8c6-prv8k" event={"ID":"89d5e852-a20e-4eb4-a37a-6ecdbaf05484","Type":"ContainerStarted","Data":"a0373c27163a4a59c5c4715ae164e59ae7965e4bd3bf3d5f64d4dbd99b9c6c21"} Feb 19 12:59:50 crc kubenswrapper[4833]: I0219 12:59:50.338090 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-55ff9b8c6-prv8k" Feb 19 12:59:50 crc kubenswrapper[4833]: I0219 12:59:50.339967 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-595bc44cf4-flp9d" event={"ID":"84aafe4e-e69c-4cdb-8987-71eb568e3c6b","Type":"ContainerStarted","Data":"307cb143aeb9a733467388cf537d6e147223c90e468f726f1b53a7cc7ff9e974"} Feb 19 12:59:50 crc kubenswrapper[4833]: I0219 12:59:50.340262 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-595bc44cf4-flp9d" Feb 19 12:59:50 crc kubenswrapper[4833]: I0219 12:59:50.411518 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-595bc44cf4-flp9d" podStartSLOduration=1.949682545 podStartE2EDuration="7.411485032s" podCreationTimestamp="2026-02-19 12:59:43 +0000 UTC" firstStartedPulling="2026-02-19 12:59:43.766545742 +0000 UTC m=+794.162064510" lastFinishedPulling="2026-02-19 12:59:49.228348229 +0000 UTC m=+799.623866997" observedRunningTime="2026-02-19 12:59:50.409655904 +0000 UTC m=+800.805174682" watchObservedRunningTime="2026-02-19 12:59:50.411485032 +0000 UTC m=+800.807003800" Feb 19 12:59:50 crc kubenswrapper[4833]: I0219 12:59:50.428200 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m92x6" Feb 19 12:59:50 crc 
kubenswrapper[4833]: I0219 12:59:50.436626 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-55ff9b8c6-prv8k" podStartSLOduration=2.112544308 podStartE2EDuration="7.436603776s" podCreationTimestamp="2026-02-19 12:59:43 +0000 UTC" firstStartedPulling="2026-02-19 12:59:43.943138807 +0000 UTC m=+794.338657575" lastFinishedPulling="2026-02-19 12:59:49.267198265 +0000 UTC m=+799.662717043" observedRunningTime="2026-02-19 12:59:50.435214019 +0000 UTC m=+800.830732797" watchObservedRunningTime="2026-02-19 12:59:50.436603776 +0000 UTC m=+800.832122554" Feb 19 12:59:50 crc kubenswrapper[4833]: I0219 12:59:50.483531 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m92x6" Feb 19 12:59:50 crc kubenswrapper[4833]: I0219 12:59:50.632594 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fhl7l"] Feb 19 12:59:50 crc kubenswrapper[4833]: I0219 12:59:50.634852 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fhl7l" Feb 19 12:59:50 crc kubenswrapper[4833]: I0219 12:59:50.657705 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fhl7l"] Feb 19 12:59:50 crc kubenswrapper[4833]: I0219 12:59:50.695466 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88f49eed-0202-449a-a76e-23a3b8d0de12-catalog-content\") pod \"community-operators-fhl7l\" (UID: \"88f49eed-0202-449a-a76e-23a3b8d0de12\") " pod="openshift-marketplace/community-operators-fhl7l" Feb 19 12:59:50 crc kubenswrapper[4833]: I0219 12:59:50.696119 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pdr8\" (UniqueName: \"kubernetes.io/projected/88f49eed-0202-449a-a76e-23a3b8d0de12-kube-api-access-2pdr8\") pod \"community-operators-fhl7l\" (UID: \"88f49eed-0202-449a-a76e-23a3b8d0de12\") " pod="openshift-marketplace/community-operators-fhl7l" Feb 19 12:59:50 crc kubenswrapper[4833]: I0219 12:59:50.696291 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88f49eed-0202-449a-a76e-23a3b8d0de12-utilities\") pod \"community-operators-fhl7l\" (UID: \"88f49eed-0202-449a-a76e-23a3b8d0de12\") " pod="openshift-marketplace/community-operators-fhl7l" Feb 19 12:59:50 crc kubenswrapper[4833]: I0219 12:59:50.798116 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88f49eed-0202-449a-a76e-23a3b8d0de12-utilities\") pod \"community-operators-fhl7l\" (UID: \"88f49eed-0202-449a-a76e-23a3b8d0de12\") " pod="openshift-marketplace/community-operators-fhl7l" Feb 19 12:59:50 crc kubenswrapper[4833]: I0219 12:59:50.798431 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88f49eed-0202-449a-a76e-23a3b8d0de12-catalog-content\") pod \"community-operators-fhl7l\" (UID: \"88f49eed-0202-449a-a76e-23a3b8d0de12\") " pod="openshift-marketplace/community-operators-fhl7l" Feb 19 12:59:50 crc kubenswrapper[4833]: I0219 12:59:50.798650 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pdr8\" 
(UniqueName: \"kubernetes.io/projected/88f49eed-0202-449a-a76e-23a3b8d0de12-kube-api-access-2pdr8\") pod \"community-operators-fhl7l\" (UID: \"88f49eed-0202-449a-a76e-23a3b8d0de12\") " pod="openshift-marketplace/community-operators-fhl7l" Feb 19 12:59:50 crc kubenswrapper[4833]: I0219 12:59:50.799603 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88f49eed-0202-449a-a76e-23a3b8d0de12-utilities\") pod \"community-operators-fhl7l\" (UID: \"88f49eed-0202-449a-a76e-23a3b8d0de12\") " pod="openshift-marketplace/community-operators-fhl7l" Feb 19 12:59:50 crc kubenswrapper[4833]: I0219 12:59:50.799990 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88f49eed-0202-449a-a76e-23a3b8d0de12-catalog-content\") pod \"community-operators-fhl7l\" (UID: \"88f49eed-0202-449a-a76e-23a3b8d0de12\") " pod="openshift-marketplace/community-operators-fhl7l" Feb 19 12:59:50 crc kubenswrapper[4833]: I0219 12:59:50.828538 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pdr8\" (UniqueName: \"kubernetes.io/projected/88f49eed-0202-449a-a76e-23a3b8d0de12-kube-api-access-2pdr8\") pod \"community-operators-fhl7l\" (UID: \"88f49eed-0202-449a-a76e-23a3b8d0de12\") " pod="openshift-marketplace/community-operators-fhl7l" Feb 19 12:59:50 crc kubenswrapper[4833]: I0219 12:59:50.966181 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fhl7l" Feb 19 12:59:51 crc kubenswrapper[4833]: I0219 12:59:51.265151 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fhl7l"] Feb 19 12:59:51 crc kubenswrapper[4833]: W0219 12:59:51.275734 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88f49eed_0202_449a_a76e_23a3b8d0de12.slice/crio-cef7a94590f1b239548545a7ef6c908a7ce6adb1e77526dbb2ff2c9cacec6818 WatchSource:0}: Error finding container cef7a94590f1b239548545a7ef6c908a7ce6adb1e77526dbb2ff2c9cacec6818: Status 404 returned error can't find the container with id cef7a94590f1b239548545a7ef6c908a7ce6adb1e77526dbb2ff2c9cacec6818 Feb 19 12:59:51 crc kubenswrapper[4833]: I0219 12:59:51.347884 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhl7l" event={"ID":"88f49eed-0202-449a-a76e-23a3b8d0de12","Type":"ContainerStarted","Data":"cef7a94590f1b239548545a7ef6c908a7ce6adb1e77526dbb2ff2c9cacec6818"} Feb 19 12:59:52 crc kubenswrapper[4833]: I0219 12:59:52.353851 4833 generic.go:334] "Generic (PLEG): container finished" podID="88f49eed-0202-449a-a76e-23a3b8d0de12" containerID="bb9cb947e409e4441611bed8c87b20cb179c0a8f958492f7b489ffe9ed44159a" exitCode=0 Feb 19 12:59:52 crc kubenswrapper[4833]: I0219 12:59:52.353899 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhl7l" event={"ID":"88f49eed-0202-449a-a76e-23a3b8d0de12","Type":"ContainerDied","Data":"bb9cb947e409e4441611bed8c87b20cb179c0a8f958492f7b489ffe9ed44159a"} Feb 19 12:59:53 crc kubenswrapper[4833]: I0219 12:59:53.359758 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhl7l" event={"ID":"88f49eed-0202-449a-a76e-23a3b8d0de12","Type":"ContainerStarted","Data":"b32e0e4e477a89289c6361ae69f1060bf188eb6b73d785b755015fdeb2e597a7"} Feb 19 12:59:53 
crc kubenswrapper[4833]: I0219 12:59:53.820272 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m92x6"] Feb 19 12:59:53 crc kubenswrapper[4833]: I0219 12:59:53.820670 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m92x6" podUID="a15828c8-da5a-4bb0-9999-890b2908d0d1" containerName="registry-server" containerID="cri-o://0455a7bb5eb4d4bc0393c5c20e479ab843bfb372f98d3d1efb44295266631b5f" gracePeriod=2 Feb 19 12:59:54 crc kubenswrapper[4833]: I0219 12:59:54.184993 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m92x6" Feb 19 12:59:54 crc kubenswrapper[4833]: I0219 12:59:54.260163 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a15828c8-da5a-4bb0-9999-890b2908d0d1-catalog-content\") pod \"a15828c8-da5a-4bb0-9999-890b2908d0d1\" (UID: \"a15828c8-da5a-4bb0-9999-890b2908d0d1\") " Feb 19 12:59:54 crc kubenswrapper[4833]: I0219 12:59:54.260276 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsrl4\" (UniqueName: \"kubernetes.io/projected/a15828c8-da5a-4bb0-9999-890b2908d0d1-kube-api-access-qsrl4\") pod \"a15828c8-da5a-4bb0-9999-890b2908d0d1\" (UID: \"a15828c8-da5a-4bb0-9999-890b2908d0d1\") " Feb 19 12:59:54 crc kubenswrapper[4833]: I0219 12:59:54.260436 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a15828c8-da5a-4bb0-9999-890b2908d0d1-utilities\") pod \"a15828c8-da5a-4bb0-9999-890b2908d0d1\" (UID: \"a15828c8-da5a-4bb0-9999-890b2908d0d1\") " Feb 19 12:59:54 crc kubenswrapper[4833]: I0219 12:59:54.261521 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a15828c8-da5a-4bb0-9999-890b2908d0d1-utilities" (OuterVolumeSpecName: "utilities") pod "a15828c8-da5a-4bb0-9999-890b2908d0d1" (UID: "a15828c8-da5a-4bb0-9999-890b2908d0d1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 12:59:54 crc kubenswrapper[4833]: I0219 12:59:54.271634 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a15828c8-da5a-4bb0-9999-890b2908d0d1-kube-api-access-qsrl4" (OuterVolumeSpecName: "kube-api-access-qsrl4") pod "a15828c8-da5a-4bb0-9999-890b2908d0d1" (UID: "a15828c8-da5a-4bb0-9999-890b2908d0d1"). InnerVolumeSpecName "kube-api-access-qsrl4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 12:59:54 crc kubenswrapper[4833]: I0219 12:59:54.362120 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsrl4\" (UniqueName: \"kubernetes.io/projected/a15828c8-da5a-4bb0-9999-890b2908d0d1-kube-api-access-qsrl4\") on node \"crc\" DevicePath \"\"" Feb 19 12:59:54 crc kubenswrapper[4833]: I0219 12:59:54.362160 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a15828c8-da5a-4bb0-9999-890b2908d0d1-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 12:59:54 crc kubenswrapper[4833]: I0219 12:59:54.367916 4833 generic.go:334] "Generic (PLEG): container finished" podID="a15828c8-da5a-4bb0-9999-890b2908d0d1" containerID="0455a7bb5eb4d4bc0393c5c20e479ab843bfb372f98d3d1efb44295266631b5f" exitCode=0 Feb 19 12:59:54 crc kubenswrapper[4833]: I0219 12:59:54.367975 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m92x6" Feb 19 12:59:54 crc kubenswrapper[4833]: I0219 12:59:54.367997 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m92x6" event={"ID":"a15828c8-da5a-4bb0-9999-890b2908d0d1","Type":"ContainerDied","Data":"0455a7bb5eb4d4bc0393c5c20e479ab843bfb372f98d3d1efb44295266631b5f"} Feb 19 12:59:54 crc kubenswrapper[4833]: I0219 12:59:54.368028 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m92x6" event={"ID":"a15828c8-da5a-4bb0-9999-890b2908d0d1","Type":"ContainerDied","Data":"126ab22104531b2db7ccb018744ccdfb1f934c30a47e14e8941dd99525ec8cf8"} Feb 19 12:59:54 crc kubenswrapper[4833]: I0219 12:59:54.368047 4833 scope.go:117] "RemoveContainer" containerID="0455a7bb5eb4d4bc0393c5c20e479ab843bfb372f98d3d1efb44295266631b5f" Feb 19 12:59:54 crc kubenswrapper[4833]: I0219 12:59:54.369825 4833 generic.go:334] "Generic (PLEG): container finished" podID="88f49eed-0202-449a-a76e-23a3b8d0de12" containerID="b32e0e4e477a89289c6361ae69f1060bf188eb6b73d785b755015fdeb2e597a7" exitCode=0 Feb 19 12:59:54 crc kubenswrapper[4833]: I0219 12:59:54.369847 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhl7l" event={"ID":"88f49eed-0202-449a-a76e-23a3b8d0de12","Type":"ContainerDied","Data":"b32e0e4e477a89289c6361ae69f1060bf188eb6b73d785b755015fdeb2e597a7"} Feb 19 12:59:54 crc kubenswrapper[4833]: I0219 12:59:54.395526 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a15828c8-da5a-4bb0-9999-890b2908d0d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a15828c8-da5a-4bb0-9999-890b2908d0d1" (UID: "a15828c8-da5a-4bb0-9999-890b2908d0d1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 12:59:54 crc kubenswrapper[4833]: I0219 12:59:54.402459 4833 scope.go:117] "RemoveContainer" containerID="24574a8b959646bfaefe6fdbdfbc5e0fa8e9b5d2df645276e9375893acdf7e5f" Feb 19 12:59:54 crc kubenswrapper[4833]: I0219 12:59:54.430317 4833 scope.go:117] "RemoveContainer" containerID="f0eb4bed7afaca2daed471a62678e60224d8f26b33a76c7159423a8fdc5ad6c9" Feb 19 12:59:54 crc kubenswrapper[4833]: I0219 12:59:54.449774 4833 scope.go:117] "RemoveContainer" containerID="0455a7bb5eb4d4bc0393c5c20e479ab843bfb372f98d3d1efb44295266631b5f" Feb 19 12:59:54 crc kubenswrapper[4833]: E0219 12:59:54.450693 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0455a7bb5eb4d4bc0393c5c20e479ab843bfb372f98d3d1efb44295266631b5f\": container with ID starting with 0455a7bb5eb4d4bc0393c5c20e479ab843bfb372f98d3d1efb44295266631b5f not found: ID does not exist" containerID="0455a7bb5eb4d4bc0393c5c20e479ab843bfb372f98d3d1efb44295266631b5f" Feb 19 12:59:54 crc kubenswrapper[4833]: I0219 12:59:54.450737 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0455a7bb5eb4d4bc0393c5c20e479ab843bfb372f98d3d1efb44295266631b5f"} err="failed to get container status \"0455a7bb5eb4d4bc0393c5c20e479ab843bfb372f98d3d1efb44295266631b5f\": rpc error: code = NotFound desc = could not find container \"0455a7bb5eb4d4bc0393c5c20e479ab843bfb372f98d3d1efb44295266631b5f\": container with ID starting with 0455a7bb5eb4d4bc0393c5c20e479ab843bfb372f98d3d1efb44295266631b5f not found: ID does not exist" Feb 19 12:59:54 crc kubenswrapper[4833]: I0219 12:59:54.450762 4833 scope.go:117] "RemoveContainer" containerID="24574a8b959646bfaefe6fdbdfbc5e0fa8e9b5d2df645276e9375893acdf7e5f" Feb 19 12:59:54 crc kubenswrapper[4833]: E0219 12:59:54.451154 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24574a8b959646bfaefe6fdbdfbc5e0fa8e9b5d2df645276e9375893acdf7e5f\": container with ID starting with 24574a8b959646bfaefe6fdbdfbc5e0fa8e9b5d2df645276e9375893acdf7e5f not found: ID does not exist" containerID="24574a8b959646bfaefe6fdbdfbc5e0fa8e9b5d2df645276e9375893acdf7e5f" Feb 19 12:59:54 crc kubenswrapper[4833]: I0219 12:59:54.451209 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24574a8b959646bfaefe6fdbdfbc5e0fa8e9b5d2df645276e9375893acdf7e5f"} err="failed to get container status \"24574a8b959646bfaefe6fdbdfbc5e0fa8e9b5d2df645276e9375893acdf7e5f\": rpc error: code = NotFound desc = could not find container \"24574a8b959646bfaefe6fdbdfbc5e0fa8e9b5d2df645276e9375893acdf7e5f\": container with ID starting with 24574a8b959646bfaefe6fdbdfbc5e0fa8e9b5d2df645276e9375893acdf7e5f not found: ID does not exist" Feb 19 12:59:54 crc kubenswrapper[4833]: I0219 12:59:54.451281 4833 scope.go:117] "RemoveContainer" containerID="f0eb4bed7afaca2daed471a62678e60224d8f26b33a76c7159423a8fdc5ad6c9" Feb 19 12:59:54 crc kubenswrapper[4833]: E0219 12:59:54.451707 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0eb4bed7afaca2daed471a62678e60224d8f26b33a76c7159423a8fdc5ad6c9\": container with ID starting with f0eb4bed7afaca2daed471a62678e60224d8f26b33a76c7159423a8fdc5ad6c9 not found: ID does not exist" containerID="f0eb4bed7afaca2daed471a62678e60224d8f26b33a76c7159423a8fdc5ad6c9" Feb 19 12:59:54 crc 
kubenswrapper[4833]: I0219 12:59:54.451731 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0eb4bed7afaca2daed471a62678e60224d8f26b33a76c7159423a8fdc5ad6c9"} err="failed to get container status \"f0eb4bed7afaca2daed471a62678e60224d8f26b33a76c7159423a8fdc5ad6c9\": rpc error: code = NotFound desc = could not find container \"f0eb4bed7afaca2daed471a62678e60224d8f26b33a76c7159423a8fdc5ad6c9\": container with ID starting with f0eb4bed7afaca2daed471a62678e60224d8f26b33a76c7159423a8fdc5ad6c9 not found: ID does not exist" Feb 19 12:59:54 crc kubenswrapper[4833]: I0219 12:59:54.463340 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a15828c8-da5a-4bb0-9999-890b2908d0d1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 12:59:54 crc kubenswrapper[4833]: I0219 12:59:54.707094 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m92x6"] Feb 19 12:59:54 crc kubenswrapper[4833]: I0219 12:59:54.714057 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m92x6"] Feb 19 12:59:55 crc kubenswrapper[4833]: I0219 12:59:55.380935 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhl7l" event={"ID":"88f49eed-0202-449a-a76e-23a3b8d0de12","Type":"ContainerStarted","Data":"90ef11b5455df69df9a5f7baebc2a6f88df782511b18cb4aa28bc8ff5b79fb97"} Feb 19 12:59:55 crc kubenswrapper[4833]: I0219 12:59:55.407098 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fhl7l" podStartSLOduration=2.974560027 podStartE2EDuration="5.407082544s" podCreationTimestamp="2026-02-19 12:59:50 +0000 UTC" firstStartedPulling="2026-02-19 12:59:52.356806289 +0000 UTC m=+802.752325057" lastFinishedPulling="2026-02-19 12:59:54.789328796 +0000 UTC m=+805.184847574" observedRunningTime="2026-02-19 12:59:55.405237976 +0000 UTC m=+805.800756794" watchObservedRunningTime="2026-02-19 12:59:55.407082544 +0000 UTC m=+805.802601322" Feb 19 12:59:56 crc kubenswrapper[4833]: I0219 12:59:56.324200 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a15828c8-da5a-4bb0-9999-890b2908d0d1" path="/var/lib/kubelet/pods/a15828c8-da5a-4bb0-9999-890b2908d0d1/volumes" Feb 19 13:00:00 crc kubenswrapper[4833]: I0219 13:00:00.175738 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525100-99zqb"] Feb 19 13:00:00 crc kubenswrapper[4833]: E0219 13:00:00.176312 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a15828c8-da5a-4bb0-9999-890b2908d0d1" containerName="extract-utilities" Feb 19 13:00:00 crc kubenswrapper[4833]: I0219 13:00:00.176330 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a15828c8-da5a-4bb0-9999-890b2908d0d1" containerName="extract-utilities" Feb 19 13:00:00 crc kubenswrapper[4833]: E0219 13:00:00.176353 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a15828c8-da5a-4bb0-9999-890b2908d0d1" containerName="extract-content" Feb 19 13:00:00 crc kubenswrapper[4833]: I0219 13:00:00.176361 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a15828c8-da5a-4bb0-9999-890b2908d0d1" containerName="extract-content" Feb 19 13:00:00 crc kubenswrapper[4833]: E0219 13:00:00.176369 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a15828c8-da5a-4bb0-9999-890b2908d0d1" 
containerName="registry-server" Feb 19 13:00:00 crc kubenswrapper[4833]: I0219 13:00:00.176377 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a15828c8-da5a-4bb0-9999-890b2908d0d1" containerName="registry-server" Feb 19 13:00:00 crc kubenswrapper[4833]: I0219 13:00:00.176536 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="a15828c8-da5a-4bb0-9999-890b2908d0d1" containerName="registry-server" Feb 19 13:00:00 crc kubenswrapper[4833]: I0219 13:00:00.177022 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525100-99zqb" Feb 19 13:00:00 crc kubenswrapper[4833]: I0219 13:00:00.179266 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 13:00:00 crc kubenswrapper[4833]: I0219 13:00:00.180372 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 13:00:00 crc kubenswrapper[4833]: I0219 13:00:00.206418 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525100-99zqb"] Feb 19 13:00:00 crc kubenswrapper[4833]: I0219 13:00:00.328884 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzr64\" (UniqueName: \"kubernetes.io/projected/eeddd17e-bcb2-4887-a818-e4617fc9599a-kube-api-access-fzr64\") pod \"collect-profiles-29525100-99zqb\" (UID: \"eeddd17e-bcb2-4887-a818-e4617fc9599a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525100-99zqb" Feb 19 13:00:00 crc kubenswrapper[4833]: I0219 13:00:00.328935 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eeddd17e-bcb2-4887-a818-e4617fc9599a-config-volume\") pod \"collect-profiles-29525100-99zqb\" (UID: \"eeddd17e-bcb2-4887-a818-e4617fc9599a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525100-99zqb" Feb 19 13:00:00 crc kubenswrapper[4833]: I0219 13:00:00.329328 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eeddd17e-bcb2-4887-a818-e4617fc9599a-secret-volume\") pod \"collect-profiles-29525100-99zqb\" (UID: \"eeddd17e-bcb2-4887-a818-e4617fc9599a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525100-99zqb" Feb 19 13:00:00 crc kubenswrapper[4833]: I0219 13:00:00.430709 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eeddd17e-bcb2-4887-a818-e4617fc9599a-secret-volume\") pod \"collect-profiles-29525100-99zqb\" (UID: \"eeddd17e-bcb2-4887-a818-e4617fc9599a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525100-99zqb" Feb 19 13:00:00 crc kubenswrapper[4833]: I0219 13:00:00.430821 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzr64\" (UniqueName: \"kubernetes.io/projected/eeddd17e-bcb2-4887-a818-e4617fc9599a-kube-api-access-fzr64\") pod \"collect-profiles-29525100-99zqb\" (UID: \"eeddd17e-bcb2-4887-a818-e4617fc9599a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525100-99zqb" Feb 19 13:00:00 crc kubenswrapper[4833]: I0219 13:00:00.430850 4833 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eeddd17e-bcb2-4887-a818-e4617fc9599a-config-volume\") pod \"collect-profiles-29525100-99zqb\" (UID: \"eeddd17e-bcb2-4887-a818-e4617fc9599a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525100-99zqb" Feb 19 13:00:00 crc kubenswrapper[4833]: I0219 13:00:00.432017 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eeddd17e-bcb2-4887-a818-e4617fc9599a-config-volume\") pod \"collect-profiles-29525100-99zqb\" (UID: \"eeddd17e-bcb2-4887-a818-e4617fc9599a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525100-99zqb" Feb 19 13:00:00 crc kubenswrapper[4833]: I0219 13:00:00.438373 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eeddd17e-bcb2-4887-a818-e4617fc9599a-secret-volume\") pod \"collect-profiles-29525100-99zqb\" (UID: \"eeddd17e-bcb2-4887-a818-e4617fc9599a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525100-99zqb" Feb 19 13:00:00 crc kubenswrapper[4833]: I0219 13:00:00.457236 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzr64\" (UniqueName: \"kubernetes.io/projected/eeddd17e-bcb2-4887-a818-e4617fc9599a-kube-api-access-fzr64\") pod \"collect-profiles-29525100-99zqb\" (UID: \"eeddd17e-bcb2-4887-a818-e4617fc9599a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525100-99zqb" Feb 19 13:00:00 crc kubenswrapper[4833]: I0219 13:00:00.513711 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525100-99zqb" Feb 19 13:00:00 crc kubenswrapper[4833]: I0219 13:00:00.780665 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525100-99zqb"] Feb 19 13:00:00 crc kubenswrapper[4833]: I0219 13:00:00.966905 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fhl7l" Feb 19 13:00:00 crc kubenswrapper[4833]: I0219 13:00:00.966963 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fhl7l" Feb 19 13:00:01 crc kubenswrapper[4833]: I0219 13:00:01.017079 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fhl7l" Feb 19 13:00:01 crc kubenswrapper[4833]: I0219 13:00:01.419394 4833 generic.go:334] "Generic (PLEG): container finished" podID="eeddd17e-bcb2-4887-a818-e4617fc9599a" containerID="d05530159aa02b1161112011c7d1bbd785fcfcd80ca8ddaa06f8d02ca065a401" exitCode=0 Feb 19 13:00:01 crc kubenswrapper[4833]: I0219 13:00:01.420784 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525100-99zqb" event={"ID":"eeddd17e-bcb2-4887-a818-e4617fc9599a","Type":"ContainerDied","Data":"d05530159aa02b1161112011c7d1bbd785fcfcd80ca8ddaa06f8d02ca065a401"} Feb 19 13:00:01 crc kubenswrapper[4833]: I0219 13:00:01.420813 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525100-99zqb" event={"ID":"eeddd17e-bcb2-4887-a818-e4617fc9599a","Type":"ContainerStarted","Data":"92a82fb865b798dfcfbbb0cd7f070e7a5219f58e0a3477e6b055501997a0dd12"} Feb 19 13:00:01 crc kubenswrapper[4833]: I0219 13:00:01.476041 4833 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fhl7l" Feb 19 13:00:02 crc kubenswrapper[4833]: I0219 13:00:02.709606 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525100-99zqb" Feb 19 13:00:02 crc kubenswrapper[4833]: I0219 13:00:02.771518 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eeddd17e-bcb2-4887-a818-e4617fc9599a-config-volume\") pod \"eeddd17e-bcb2-4887-a818-e4617fc9599a\" (UID: \"eeddd17e-bcb2-4887-a818-e4617fc9599a\") " Feb 19 13:00:02 crc kubenswrapper[4833]: I0219 13:00:02.771589 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzr64\" (UniqueName: \"kubernetes.io/projected/eeddd17e-bcb2-4887-a818-e4617fc9599a-kube-api-access-fzr64\") pod \"eeddd17e-bcb2-4887-a818-e4617fc9599a\" (UID: \"eeddd17e-bcb2-4887-a818-e4617fc9599a\") " Feb 19 13:00:02 crc kubenswrapper[4833]: I0219 13:00:02.771626 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eeddd17e-bcb2-4887-a818-e4617fc9599a-secret-volume\") pod \"eeddd17e-bcb2-4887-a818-e4617fc9599a\" (UID: \"eeddd17e-bcb2-4887-a818-e4617fc9599a\") " Feb 19 13:00:02 crc kubenswrapper[4833]: I0219 13:00:02.772275 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eeddd17e-bcb2-4887-a818-e4617fc9599a-config-volume" (OuterVolumeSpecName: "config-volume") pod "eeddd17e-bcb2-4887-a818-e4617fc9599a" (UID: "eeddd17e-bcb2-4887-a818-e4617fc9599a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:00:02 crc kubenswrapper[4833]: I0219 13:00:02.776816 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeddd17e-bcb2-4887-a818-e4617fc9599a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "eeddd17e-bcb2-4887-a818-e4617fc9599a" (UID: "eeddd17e-bcb2-4887-a818-e4617fc9599a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:00:02 crc kubenswrapper[4833]: I0219 13:00:02.777596 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeddd17e-bcb2-4887-a818-e4617fc9599a-kube-api-access-fzr64" (OuterVolumeSpecName: "kube-api-access-fzr64") pod "eeddd17e-bcb2-4887-a818-e4617fc9599a" (UID: "eeddd17e-bcb2-4887-a818-e4617fc9599a"). InnerVolumeSpecName "kube-api-access-fzr64". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:00:02 crc kubenswrapper[4833]: I0219 13:00:02.872479 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzr64\" (UniqueName: \"kubernetes.io/projected/eeddd17e-bcb2-4887-a818-e4617fc9599a-kube-api-access-fzr64\") on node \"crc\" DevicePath \"\"" Feb 19 13:00:02 crc kubenswrapper[4833]: I0219 13:00:02.872540 4833 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eeddd17e-bcb2-4887-a818-e4617fc9599a-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 13:00:02 crc kubenswrapper[4833]: I0219 13:00:02.872549 4833 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eeddd17e-bcb2-4887-a818-e4617fc9599a-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 13:00:03 crc kubenswrapper[4833]: I0219 13:00:03.432999 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525100-99zqb" event={"ID":"eeddd17e-bcb2-4887-a818-e4617fc9599a","Type":"ContainerDied","Data":"92a82fb865b798dfcfbbb0cd7f070e7a5219f58e0a3477e6b055501997a0dd12"} Feb 19 13:00:03 crc kubenswrapper[4833]: I0219 13:00:03.433044 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92a82fb865b798dfcfbbb0cd7f070e7a5219f58e0a3477e6b055501997a0dd12" Feb 19 13:00:03 crc kubenswrapper[4833]: I0219 13:00:03.433117 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525100-99zqb" Feb 19 13:00:03 crc kubenswrapper[4833]: I0219 13:00:03.744322 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-55ff9b8c6-prv8k" Feb 19 13:00:04 crc kubenswrapper[4833]: I0219 13:00:04.015581 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fhl7l"] Feb 19 13:00:04 crc kubenswrapper[4833]: I0219 13:00:04.015840 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fhl7l" podUID="88f49eed-0202-449a-a76e-23a3b8d0de12" containerName="registry-server" containerID="cri-o://90ef11b5455df69df9a5f7baebc2a6f88df782511b18cb4aa28bc8ff5b79fb97" gracePeriod=2 Feb 19 13:00:05 crc kubenswrapper[4833]: I0219 13:00:05.444303 4833 generic.go:334] "Generic (PLEG): container finished" podID="88f49eed-0202-449a-a76e-23a3b8d0de12" containerID="90ef11b5455df69df9a5f7baebc2a6f88df782511b18cb4aa28bc8ff5b79fb97" exitCode=0 Feb 19 13:00:05 crc kubenswrapper[4833]: I0219 13:00:05.444383 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhl7l" event={"ID":"88f49eed-0202-449a-a76e-23a3b8d0de12","Type":"ContainerDied","Data":"90ef11b5455df69df9a5f7baebc2a6f88df782511b18cb4aa28bc8ff5b79fb97"} Feb 19 13:00:05 crc kubenswrapper[4833]: I0219 13:00:05.720104 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fhl7l" Feb 19 13:00:05 crc kubenswrapper[4833]: I0219 13:00:05.911120 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88f49eed-0202-449a-a76e-23a3b8d0de12-utilities\") pod \"88f49eed-0202-449a-a76e-23a3b8d0de12\" (UID: \"88f49eed-0202-449a-a76e-23a3b8d0de12\") " Feb 19 13:00:05 crc kubenswrapper[4833]: I0219 13:00:05.911225 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88f49eed-0202-449a-a76e-23a3b8d0de12-catalog-content\") pod \"88f49eed-0202-449a-a76e-23a3b8d0de12\" (UID: \"88f49eed-0202-449a-a76e-23a3b8d0de12\") " Feb 19 13:00:05 crc kubenswrapper[4833]: I0219 13:00:05.911288 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pdr8\" (UniqueName: \"kubernetes.io/projected/88f49eed-0202-449a-a76e-23a3b8d0de12-kube-api-access-2pdr8\") pod \"88f49eed-0202-449a-a76e-23a3b8d0de12\" (UID: \"88f49eed-0202-449a-a76e-23a3b8d0de12\") " Feb 19 13:00:05 crc kubenswrapper[4833]: I0219 13:00:05.911923 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88f49eed-0202-449a-a76e-23a3b8d0de12-utilities" (OuterVolumeSpecName: "utilities") pod "88f49eed-0202-449a-a76e-23a3b8d0de12" (UID: "88f49eed-0202-449a-a76e-23a3b8d0de12"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:00:05 crc kubenswrapper[4833]: I0219 13:00:05.917630 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88f49eed-0202-449a-a76e-23a3b8d0de12-kube-api-access-2pdr8" (OuterVolumeSpecName: "kube-api-access-2pdr8") pod "88f49eed-0202-449a-a76e-23a3b8d0de12" (UID: "88f49eed-0202-449a-a76e-23a3b8d0de12"). InnerVolumeSpecName "kube-api-access-2pdr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:00:05 crc kubenswrapper[4833]: I0219 13:00:05.957116 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88f49eed-0202-449a-a76e-23a3b8d0de12-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88f49eed-0202-449a-a76e-23a3b8d0de12" (UID: "88f49eed-0202-449a-a76e-23a3b8d0de12"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:00:06 crc kubenswrapper[4833]: I0219 13:00:06.012578 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pdr8\" (UniqueName: \"kubernetes.io/projected/88f49eed-0202-449a-a76e-23a3b8d0de12-kube-api-access-2pdr8\") on node \"crc\" DevicePath \"\"" Feb 19 13:00:06 crc kubenswrapper[4833]: I0219 13:00:06.012617 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88f49eed-0202-449a-a76e-23a3b8d0de12-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:00:06 crc kubenswrapper[4833]: I0219 13:00:06.012631 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88f49eed-0202-449a-a76e-23a3b8d0de12-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:00:06 crc kubenswrapper[4833]: I0219 13:00:06.452391 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fhl7l" event={"ID":"88f49eed-0202-449a-a76e-23a3b8d0de12","Type":"ContainerDied","Data":"cef7a94590f1b239548545a7ef6c908a7ce6adb1e77526dbb2ff2c9cacec6818"} Feb 19 13:00:06 crc kubenswrapper[4833]: I0219 13:00:06.453206 4833 scope.go:117] "RemoveContainer" containerID="90ef11b5455df69df9a5f7baebc2a6f88df782511b18cb4aa28bc8ff5b79fb97" Feb 19 13:00:06 crc kubenswrapper[4833]: I0219 13:00:06.452527 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fhl7l" Feb 19 13:00:06 crc kubenswrapper[4833]: I0219 13:00:06.471857 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fhl7l"] Feb 19 13:00:06 crc kubenswrapper[4833]: I0219 13:00:06.476816 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fhl7l"] Feb 19 13:00:06 crc kubenswrapper[4833]: I0219 13:00:06.484831 4833 scope.go:117] "RemoveContainer" containerID="b32e0e4e477a89289c6361ae69f1060bf188eb6b73d785b755015fdeb2e597a7" Feb 19 13:00:06 crc kubenswrapper[4833]: I0219 13:00:06.503077 4833 scope.go:117] "RemoveContainer" containerID="bb9cb947e409e4441611bed8c87b20cb179c0a8f958492f7b489ffe9ed44159a" Feb 19 13:00:08 crc kubenswrapper[4833]: I0219 13:00:08.324006 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88f49eed-0202-449a-a76e-23a3b8d0de12" path="/var/lib/kubelet/pods/88f49eed-0202-449a-a76e-23a3b8d0de12/volumes" Feb 19 13:00:23 crc kubenswrapper[4833]: I0219 13:00:23.509923 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-595bc44cf4-flp9d" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.355192 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-5dn4l"] Feb 19 13:00:24 crc kubenswrapper[4833]: E0219 13:00:24.355896 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88f49eed-0202-449a-a76e-23a3b8d0de12" containerName="registry-server" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.355927 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="88f49eed-0202-449a-a76e-23a3b8d0de12" containerName="registry-server" Feb 19 13:00:24 crc kubenswrapper[4833]: E0219 13:00:24.355952 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88f49eed-0202-449a-a76e-23a3b8d0de12" containerName="extract-utilities" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.355965 4833 
state_mem.go:107] "Deleted CPUSet assignment" podUID="88f49eed-0202-449a-a76e-23a3b8d0de12" containerName="extract-utilities" Feb 19 13:00:24 crc kubenswrapper[4833]: E0219 13:00:24.355989 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88f49eed-0202-449a-a76e-23a3b8d0de12" containerName="extract-content" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.356002 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="88f49eed-0202-449a-a76e-23a3b8d0de12" containerName="extract-content" Feb 19 13:00:24 crc kubenswrapper[4833]: E0219 13:00:24.356022 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeddd17e-bcb2-4887-a818-e4617fc9599a" containerName="collect-profiles" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.356034 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeddd17e-bcb2-4887-a818-e4617fc9599a" containerName="collect-profiles" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.356212 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeddd17e-bcb2-4887-a818-e4617fc9599a" containerName="collect-profiles" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.356239 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="88f49eed-0202-449a-a76e-23a3b8d0de12" containerName="registry-server" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.362513 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-5dn4l" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.365142 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-cwt7k" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.365144 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.365303 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.381198 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-69h4r"] Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.381964 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-69h4r" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.390519 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.391011 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-69h4r"] Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.436359 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-9tx4p"] Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.437390 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-9tx4p" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.439739 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-cgfgh" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.440447 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.440456 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.440678 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.450130 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-7hv4q"] Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.450967 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-7hv4q" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.454103 4833 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.471014 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-7hv4q"] Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.494031 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/baf68531-b18e-4d82-9787-08a0c9381707-reloader\") pod \"frr-k8s-5dn4l\" (UID: \"baf68531-b18e-4d82-9787-08a0c9381707\") " pod="metallb-system/frr-k8s-5dn4l" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.494075 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/baf68531-b18e-4d82-9787-08a0c9381707-metrics\") pod \"frr-k8s-5dn4l\" (UID: \"baf68531-b18e-4d82-9787-08a0c9381707\") " pod="metallb-system/frr-k8s-5dn4l" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.494170 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/62869774-530a-477d-bac0-df5e4fba9daa-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-69h4r\" (UID: \"62869774-530a-477d-bac0-df5e4fba9daa\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-69h4r" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.494271 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/baf68531-b18e-4d82-9787-08a0c9381707-metrics-certs\") pod \"frr-k8s-5dn4l\" (UID: \"baf68531-b18e-4d82-9787-08a0c9381707\") " pod="metallb-system/frr-k8s-5dn4l" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.494332 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kc5j\" (UniqueName: \"kubernetes.io/projected/baf68531-b18e-4d82-9787-08a0c9381707-kube-api-access-5kc5j\") pod \"frr-k8s-5dn4l\" (UID: \"baf68531-b18e-4d82-9787-08a0c9381707\") " pod="metallb-system/frr-k8s-5dn4l" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.494565 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: 
\"kubernetes.io/empty-dir/baf68531-b18e-4d82-9787-08a0c9381707-frr-sockets\") pod \"frr-k8s-5dn4l\" (UID: \"baf68531-b18e-4d82-9787-08a0c9381707\") " pod="metallb-system/frr-k8s-5dn4l" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.494587 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/baf68531-b18e-4d82-9787-08a0c9381707-frr-startup\") pod \"frr-k8s-5dn4l\" (UID: \"baf68531-b18e-4d82-9787-08a0c9381707\") " pod="metallb-system/frr-k8s-5dn4l" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.494720 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/baf68531-b18e-4d82-9787-08a0c9381707-frr-conf\") pod \"frr-k8s-5dn4l\" (UID: \"baf68531-b18e-4d82-9787-08a0c9381707\") " pod="metallb-system/frr-k8s-5dn4l" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.494848 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg5jf\" (UniqueName: \"kubernetes.io/projected/62869774-530a-477d-bac0-df5e4fba9daa-kube-api-access-pg5jf\") pod \"frr-k8s-webhook-server-78b44bf5bb-69h4r\" (UID: \"62869774-530a-477d-bac0-df5e4fba9daa\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-69h4r" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.596186 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/baf68531-b18e-4d82-9787-08a0c9381707-metrics-certs\") pod \"frr-k8s-5dn4l\" (UID: \"baf68531-b18e-4d82-9787-08a0c9381707\") " pod="metallb-system/frr-k8s-5dn4l" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.596258 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kc5j\" (UniqueName: \"kubernetes.io/projected/baf68531-b18e-4d82-9787-08a0c9381707-kube-api-access-5kc5j\") pod \"frr-k8s-5dn4l\" (UID: \"baf68531-b18e-4d82-9787-08a0c9381707\") " pod="metallb-system/frr-k8s-5dn4l" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.596296 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/baf68531-b18e-4d82-9787-08a0c9381707-frr-sockets\") pod \"frr-k8s-5dn4l\" (UID: \"baf68531-b18e-4d82-9787-08a0c9381707\") " pod="metallb-system/frr-k8s-5dn4l" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.596316 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/baf68531-b18e-4d82-9787-08a0c9381707-frr-startup\") pod \"frr-k8s-5dn4l\" (UID: \"baf68531-b18e-4d82-9787-08a0c9381707\") " pod="metallb-system/frr-k8s-5dn4l" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.596343 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/810d0dc6-4fd1-4c62-838b-f759e361ea26-cert\") pod \"controller-69bbfbf88f-7hv4q\" (UID: \"810d0dc6-4fd1-4c62-838b-f759e361ea26\") " pod="metallb-system/controller-69bbfbf88f-7hv4q" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.596367 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h27ws\" (UniqueName: \"kubernetes.io/projected/810d0dc6-4fd1-4c62-838b-f759e361ea26-kube-api-access-h27ws\") pod \"controller-69bbfbf88f-7hv4q\" (UID: 
\"810d0dc6-4fd1-4c62-838b-f759e361ea26\") " pod="metallb-system/controller-69bbfbf88f-7hv4q" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.596389 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/baf68531-b18e-4d82-9787-08a0c9381707-frr-conf\") pod \"frr-k8s-5dn4l\" (UID: \"baf68531-b18e-4d82-9787-08a0c9381707\") " pod="metallb-system/frr-k8s-5dn4l" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.596413 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg5jf\" (UniqueName: \"kubernetes.io/projected/62869774-530a-477d-bac0-df5e4fba9daa-kube-api-access-pg5jf\") pod \"frr-k8s-webhook-server-78b44bf5bb-69h4r\" (UID: \"62869774-530a-477d-bac0-df5e4fba9daa\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-69h4r" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.596556 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e5afb617-4d1b-4c96-a669-f669e870501f-metrics-certs\") pod \"speaker-9tx4p\" (UID: \"e5afb617-4d1b-4c96-a669-f669e870501f\") " pod="metallb-system/speaker-9tx4p" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.596677 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5gkk\" (UniqueName: \"kubernetes.io/projected/e5afb617-4d1b-4c96-a669-f669e870501f-kube-api-access-r5gkk\") pod \"speaker-9tx4p\" (UID: \"e5afb617-4d1b-4c96-a669-f669e870501f\") " pod="metallb-system/speaker-9tx4p" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.596707 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/810d0dc6-4fd1-4c62-838b-f759e361ea26-metrics-certs\") pod \"controller-69bbfbf88f-7hv4q\" (UID: \"810d0dc6-4fd1-4c62-838b-f759e361ea26\") " pod="metallb-system/controller-69bbfbf88f-7hv4q" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.596730 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/baf68531-b18e-4d82-9787-08a0c9381707-reloader\") pod \"frr-k8s-5dn4l\" (UID: \"baf68531-b18e-4d82-9787-08a0c9381707\") " pod="metallb-system/frr-k8s-5dn4l" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.596755 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e5afb617-4d1b-4c96-a669-f669e870501f-metallb-excludel2\") pod \"speaker-9tx4p\" (UID: \"e5afb617-4d1b-4c96-a669-f669e870501f\") " pod="metallb-system/speaker-9tx4p" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.596781 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/baf68531-b18e-4d82-9787-08a0c9381707-metrics\") pod \"frr-k8s-5dn4l\" (UID: \"baf68531-b18e-4d82-9787-08a0c9381707\") " pod="metallb-system/frr-k8s-5dn4l" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.596816 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/62869774-530a-477d-bac0-df5e4fba9daa-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-69h4r\" (UID: \"62869774-530a-477d-bac0-df5e4fba9daa\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-69h4r" Feb 19 
13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.596821 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/baf68531-b18e-4d82-9787-08a0c9381707-frr-sockets\") pod \"frr-k8s-5dn4l\" (UID: \"baf68531-b18e-4d82-9787-08a0c9381707\") " pod="metallb-system/frr-k8s-5dn4l" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.596913 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/baf68531-b18e-4d82-9787-08a0c9381707-frr-conf\") pod \"frr-k8s-5dn4l\" (UID: \"baf68531-b18e-4d82-9787-08a0c9381707\") " pod="metallb-system/frr-k8s-5dn4l" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.597005 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e5afb617-4d1b-4c96-a669-f669e870501f-memberlist\") pod \"speaker-9tx4p\" (UID: \"e5afb617-4d1b-4c96-a669-f669e870501f\") " pod="metallb-system/speaker-9tx4p" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.597082 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/baf68531-b18e-4d82-9787-08a0c9381707-reloader\") pod \"frr-k8s-5dn4l\" (UID: \"baf68531-b18e-4d82-9787-08a0c9381707\") " pod="metallb-system/frr-k8s-5dn4l" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.597120 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/baf68531-b18e-4d82-9787-08a0c9381707-metrics\") pod \"frr-k8s-5dn4l\" (UID: \"baf68531-b18e-4d82-9787-08a0c9381707\") " pod="metallb-system/frr-k8s-5dn4l" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.597409 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/baf68531-b18e-4d82-9787-08a0c9381707-frr-startup\") pod \"frr-k8s-5dn4l\" (UID: \"baf68531-b18e-4d82-9787-08a0c9381707\") " pod="metallb-system/frr-k8s-5dn4l" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.606296 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/baf68531-b18e-4d82-9787-08a0c9381707-metrics-certs\") pod \"frr-k8s-5dn4l\" (UID: \"baf68531-b18e-4d82-9787-08a0c9381707\") " pod="metallb-system/frr-k8s-5dn4l" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.612036 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/62869774-530a-477d-bac0-df5e4fba9daa-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-69h4r\" (UID: \"62869774-530a-477d-bac0-df5e4fba9daa\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-69h4r" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.617918 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg5jf\" (UniqueName: \"kubernetes.io/projected/62869774-530a-477d-bac0-df5e4fba9daa-kube-api-access-pg5jf\") pod \"frr-k8s-webhook-server-78b44bf5bb-69h4r\" (UID: \"62869774-530a-477d-bac0-df5e4fba9daa\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-69h4r" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.626398 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kc5j\" (UniqueName: \"kubernetes.io/projected/baf68531-b18e-4d82-9787-08a0c9381707-kube-api-access-5kc5j\") pod \"frr-k8s-5dn4l\" (UID: 
\"baf68531-b18e-4d82-9787-08a0c9381707\") " pod="metallb-system/frr-k8s-5dn4l" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.681534 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-5dn4l" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.697300 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-69h4r" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.699726 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/810d0dc6-4fd1-4c62-838b-f759e361ea26-cert\") pod \"controller-69bbfbf88f-7hv4q\" (UID: \"810d0dc6-4fd1-4c62-838b-f759e361ea26\") " pod="metallb-system/controller-69bbfbf88f-7hv4q" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.699768 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h27ws\" (UniqueName: \"kubernetes.io/projected/810d0dc6-4fd1-4c62-838b-f759e361ea26-kube-api-access-h27ws\") pod \"controller-69bbfbf88f-7hv4q\" (UID: \"810d0dc6-4fd1-4c62-838b-f759e361ea26\") " pod="metallb-system/controller-69bbfbf88f-7hv4q" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.699806 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e5afb617-4d1b-4c96-a669-f669e870501f-metrics-certs\") pod \"speaker-9tx4p\" (UID: \"e5afb617-4d1b-4c96-a669-f669e870501f\") " pod="metallb-system/speaker-9tx4p" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.699838 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5gkk\" (UniqueName: \"kubernetes.io/projected/e5afb617-4d1b-4c96-a669-f669e870501f-kube-api-access-r5gkk\") pod \"speaker-9tx4p\" (UID: \"e5afb617-4d1b-4c96-a669-f669e870501f\") " pod="metallb-system/speaker-9tx4p" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.699857 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/810d0dc6-4fd1-4c62-838b-f759e361ea26-metrics-certs\") pod \"controller-69bbfbf88f-7hv4q\" (UID: \"810d0dc6-4fd1-4c62-838b-f759e361ea26\") " pod="metallb-system/controller-69bbfbf88f-7hv4q" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.699876 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e5afb617-4d1b-4c96-a669-f669e870501f-metallb-excludel2\") pod \"speaker-9tx4p\" (UID: \"e5afb617-4d1b-4c96-a669-f669e870501f\") " pod="metallb-system/speaker-9tx4p" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.699908 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e5afb617-4d1b-4c96-a669-f669e870501f-memberlist\") pod \"speaker-9tx4p\" (UID: \"e5afb617-4d1b-4c96-a669-f669e870501f\") " pod="metallb-system/speaker-9tx4p" Feb 19 13:00:24 crc kubenswrapper[4833]: E0219 13:00:24.700029 4833 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 19 13:00:24 crc kubenswrapper[4833]: E0219 13:00:24.700097 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5afb617-4d1b-4c96-a669-f669e870501f-memberlist podName:e5afb617-4d1b-4c96-a669-f669e870501f nodeName:}" failed. 
No retries permitted until 2026-02-19 13:00:25.200079952 +0000 UTC m=+835.595598720 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e5afb617-4d1b-4c96-a669-f669e870501f-memberlist") pod "speaker-9tx4p" (UID: "e5afb617-4d1b-4c96-a669-f669e870501f") : secret "metallb-memberlist" not found Feb 19 13:00:24 crc kubenswrapper[4833]: E0219 13:00:24.701751 4833 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 19 13:00:24 crc kubenswrapper[4833]: E0219 13:00:24.701872 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/810d0dc6-4fd1-4c62-838b-f759e361ea26-metrics-certs podName:810d0dc6-4fd1-4c62-838b-f759e361ea26 nodeName:}" failed. No retries permitted until 2026-02-19 13:00:25.201804238 +0000 UTC m=+835.597323066 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/810d0dc6-4fd1-4c62-838b-f759e361ea26-metrics-certs") pod "controller-69bbfbf88f-7hv4q" (UID: "810d0dc6-4fd1-4c62-838b-f759e361ea26") : secret "controller-certs-secret" not found Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.701875 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e5afb617-4d1b-4c96-a669-f669e870501f-metallb-excludel2\") pod \"speaker-9tx4p\" (UID: \"e5afb617-4d1b-4c96-a669-f669e870501f\") " pod="metallb-system/speaker-9tx4p" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.703760 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e5afb617-4d1b-4c96-a669-f669e870501f-metrics-certs\") pod \"speaker-9tx4p\" (UID: \"e5afb617-4d1b-4c96-a669-f669e870501f\") " pod="metallb-system/speaker-9tx4p" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.703938 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/810d0dc6-4fd1-4c62-838b-f759e361ea26-cert\") pod \"controller-69bbfbf88f-7hv4q\" (UID: \"810d0dc6-4fd1-4c62-838b-f759e361ea26\") " pod="metallb-system/controller-69bbfbf88f-7hv4q" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.717154 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h27ws\" (UniqueName: \"kubernetes.io/projected/810d0dc6-4fd1-4c62-838b-f759e361ea26-kube-api-access-h27ws\") pod \"controller-69bbfbf88f-7hv4q\" (UID: \"810d0dc6-4fd1-4c62-838b-f759e361ea26\") " pod="metallb-system/controller-69bbfbf88f-7hv4q" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.719214 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5gkk\" (UniqueName: \"kubernetes.io/projected/e5afb617-4d1b-4c96-a669-f669e870501f-kube-api-access-r5gkk\") pod \"speaker-9tx4p\" (UID: \"e5afb617-4d1b-4c96-a669-f669e870501f\") " pod="metallb-system/speaker-9tx4p" Feb 19 13:00:24 crc kubenswrapper[4833]: I0219 13:00:24.900718 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-69h4r"] Feb 19 13:00:24 crc kubenswrapper[4833]: W0219 13:00:24.909604 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62869774_530a_477d_bac0_df5e4fba9daa.slice/crio-49a270d091b2858623e9c5ec2be8de9b1861a2d2260a137d0dda0526b102e905 WatchSource:0}: Error 
finding container 49a270d091b2858623e9c5ec2be8de9b1861a2d2260a137d0dda0526b102e905: Status 404 returned error can't find the container with id 49a270d091b2858623e9c5ec2be8de9b1861a2d2260a137d0dda0526b102e905 Feb 19 13:00:25 crc kubenswrapper[4833]: I0219 13:00:25.204321 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/810d0dc6-4fd1-4c62-838b-f759e361ea26-metrics-certs\") pod \"controller-69bbfbf88f-7hv4q\" (UID: \"810d0dc6-4fd1-4c62-838b-f759e361ea26\") " pod="metallb-system/controller-69bbfbf88f-7hv4q" Feb 19 13:00:25 crc kubenswrapper[4833]: I0219 13:00:25.204399 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e5afb617-4d1b-4c96-a669-f669e870501f-memberlist\") pod \"speaker-9tx4p\" (UID: \"e5afb617-4d1b-4c96-a669-f669e870501f\") " pod="metallb-system/speaker-9tx4p" Feb 19 13:00:25 crc kubenswrapper[4833]: E0219 13:00:25.204572 4833 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 19 13:00:25 crc kubenswrapper[4833]: E0219 13:00:25.204632 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5afb617-4d1b-4c96-a669-f669e870501f-memberlist podName:e5afb617-4d1b-4c96-a669-f669e870501f nodeName:}" failed. No retries permitted until 2026-02-19 13:00:26.20461396 +0000 UTC m=+836.600132728 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e5afb617-4d1b-4c96-a669-f669e870501f-memberlist") pod "speaker-9tx4p" (UID: "e5afb617-4d1b-4c96-a669-f669e870501f") : secret "metallb-memberlist" not found Feb 19 13:00:25 crc kubenswrapper[4833]: I0219 13:00:25.209715 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/810d0dc6-4fd1-4c62-838b-f759e361ea26-metrics-certs\") pod \"controller-69bbfbf88f-7hv4q\" (UID: \"810d0dc6-4fd1-4c62-838b-f759e361ea26\") " pod="metallb-system/controller-69bbfbf88f-7hv4q" Feb 19 13:00:25 crc kubenswrapper[4833]: I0219 13:00:25.364887 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-7hv4q" Feb 19 13:00:25 crc kubenswrapper[4833]: I0219 13:00:25.567798 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5dn4l" event={"ID":"baf68531-b18e-4d82-9787-08a0c9381707","Type":"ContainerStarted","Data":"9115329c139a961869dee0a16fea4344aa2e171e9477b1f22f2eae5334f18a18"} Feb 19 13:00:25 crc kubenswrapper[4833]: I0219 13:00:25.568528 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-69h4r" event={"ID":"62869774-530a-477d-bac0-df5e4fba9daa","Type":"ContainerStarted","Data":"49a270d091b2858623e9c5ec2be8de9b1861a2d2260a137d0dda0526b102e905"} Feb 19 13:00:25 crc kubenswrapper[4833]: I0219 13:00:25.583909 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-7hv4q"] Feb 19 13:00:25 crc kubenswrapper[4833]: W0219 13:00:25.593727 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod810d0dc6_4fd1_4c62_838b_f759e361ea26.slice/crio-f9b916b887e66ea498106a7d47b01d5112651c45e39554feaf08059cc411b2ff WatchSource:0}: Error finding container f9b916b887e66ea498106a7d47b01d5112651c45e39554feaf08059cc411b2ff: Status 404 returned error can't find the container with id f9b916b887e66ea498106a7d47b01d5112651c45e39554feaf08059cc411b2ff Feb 19 13:00:26 crc kubenswrapper[4833]: I0219 13:00:26.234824 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e5afb617-4d1b-4c96-a669-f669e870501f-memberlist\") pod \"speaker-9tx4p\" (UID: \"e5afb617-4d1b-4c96-a669-f669e870501f\") " pod="metallb-system/speaker-9tx4p" Feb 19 13:00:26 crc kubenswrapper[4833]: I0219 13:00:26.239421 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e5afb617-4d1b-4c96-a669-f669e870501f-memberlist\") pod \"speaker-9tx4p\" (UID: \"e5afb617-4d1b-4c96-a669-f669e870501f\") " pod="metallb-system/speaker-9tx4p" Feb 19 13:00:26 crc kubenswrapper[4833]: I0219 13:00:26.251195 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-9tx4p" Feb 19 13:00:26 crc kubenswrapper[4833]: W0219 13:00:26.275156 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5afb617_4d1b_4c96_a669_f669e870501f.slice/crio-6ad835152ab88238a8a93ecd040ea8a4265b68885c0db419cdd42d38e49f2dbf WatchSource:0}: Error finding container 6ad835152ab88238a8a93ecd040ea8a4265b68885c0db419cdd42d38e49f2dbf: Status 404 returned error can't find the container with id 6ad835152ab88238a8a93ecd040ea8a4265b68885c0db419cdd42d38e49f2dbf Feb 19 13:00:26 crc kubenswrapper[4833]: I0219 13:00:26.581224 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9tx4p" event={"ID":"e5afb617-4d1b-4c96-a669-f669e870501f","Type":"ContainerStarted","Data":"6ad835152ab88238a8a93ecd040ea8a4265b68885c0db419cdd42d38e49f2dbf"} Feb 19 13:00:26 crc kubenswrapper[4833]: I0219 13:00:26.584360 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-7hv4q" event={"ID":"810d0dc6-4fd1-4c62-838b-f759e361ea26","Type":"ContainerStarted","Data":"4d504f65d57706648100e1e98779d007ca03e3253ab4e719f98ba2a9f807d114"} Feb 19 13:00:26 crc kubenswrapper[4833]: I0219 13:00:26.584387 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-7hv4q" event={"ID":"810d0dc6-4fd1-4c62-838b-f759e361ea26","Type":"ContainerStarted","Data":"418b5285933e876caadfbbe0017eccb42beb3d31b89c0bf15ca82bf82542c56b"} Feb 19 13:00:26 crc kubenswrapper[4833]: I0219 13:00:26.584397 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-7hv4q" event={"ID":"810d0dc6-4fd1-4c62-838b-f759e361ea26","Type":"ContainerStarted","Data":"f9b916b887e66ea498106a7d47b01d5112651c45e39554feaf08059cc411b2ff"} Feb 19 13:00:26 crc kubenswrapper[4833]: I0219 13:00:26.584551 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-7hv4q" Feb 19 13:00:26 crc kubenswrapper[4833]: I0219 13:00:26.601773 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-7hv4q" podStartSLOduration=2.601757177 podStartE2EDuration="2.601757177s" podCreationTimestamp="2026-02-19 13:00:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:00:26.600016461 +0000 UTC m=+836.995535229" watchObservedRunningTime="2026-02-19 13:00:26.601757177 +0000 UTC m=+836.997275945" Feb 19 13:00:27 crc kubenswrapper[4833]: I0219 13:00:27.593457 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9tx4p" event={"ID":"e5afb617-4d1b-4c96-a669-f669e870501f","Type":"ContainerStarted","Data":"263a34da0300efa1508a690976be4f5b6ce6169fed66837bfec925ad2be02e63"} Feb 19 13:00:27 crc kubenswrapper[4833]: I0219 13:00:27.593866 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-9tx4p" event={"ID":"e5afb617-4d1b-4c96-a669-f669e870501f","Type":"ContainerStarted","Data":"ce8182c3c44f32ace7dfdb820c9b68ad92b904f536d9c6408085a02b936001ec"} Feb 19 13:00:27 crc kubenswrapper[4833]: I0219 13:00:27.608908 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-9tx4p" podStartSLOduration=3.608891931 podStartE2EDuration="3.608891931s" podCreationTimestamp="2026-02-19 13:00:24 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:00:27.605989114 +0000 UTC m=+838.001507892" watchObservedRunningTime="2026-02-19 13:00:27.608891931 +0000 UTC m=+838.004410699" Feb 19 13:00:28 crc kubenswrapper[4833]: I0219 13:00:28.600014 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-9tx4p" Feb 19 13:00:35 crc kubenswrapper[4833]: I0219 13:00:35.370324 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-7hv4q" Feb 19 13:00:35 crc kubenswrapper[4833]: I0219 13:00:35.653815 4833 generic.go:334] "Generic (PLEG): container finished" podID="baf68531-b18e-4d82-9787-08a0c9381707" containerID="1f1cd6114949fe93b11fb37d1b6d011de8bb38f43d395086b9caeb2c9f1fb2d4" exitCode=0 Feb 19 13:00:35 crc kubenswrapper[4833]: I0219 13:00:35.653889 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5dn4l" event={"ID":"baf68531-b18e-4d82-9787-08a0c9381707","Type":"ContainerDied","Data":"1f1cd6114949fe93b11fb37d1b6d011de8bb38f43d395086b9caeb2c9f1fb2d4"} Feb 19 13:00:35 crc kubenswrapper[4833]: I0219 13:00:35.655736 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-69h4r" event={"ID":"62869774-530a-477d-bac0-df5e4fba9daa","Type":"ContainerStarted","Data":"a5fa72daf8cbdaf6fba726d2bf625acd3d063f73f8fc48c36831d64742822469"} Feb 19 13:00:35 crc kubenswrapper[4833]: I0219 13:00:35.656065 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-69h4r" Feb 19 13:00:35 crc kubenswrapper[4833]: I0219 13:00:35.715222 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-69h4r" podStartSLOduration=1.544294697 podStartE2EDuration="11.715199259s" podCreationTimestamp="2026-02-19 13:00:24 +0000 UTC" firstStartedPulling="2026-02-19 13:00:24.916309424 +0000 UTC m=+835.311828192" lastFinishedPulling="2026-02-19 13:00:35.087213986 +0000 UTC m=+845.482732754" observedRunningTime="2026-02-19 13:00:35.714992274 +0000 UTC m=+846.110511042" watchObservedRunningTime="2026-02-19 13:00:35.715199259 +0000 UTC m=+846.110718027" Feb 19 13:00:36 crc kubenswrapper[4833]: I0219 13:00:36.259429 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-9tx4p" Feb 19 13:00:36 crc kubenswrapper[4833]: I0219 13:00:36.663434 4833 generic.go:334] "Generic (PLEG): container finished" podID="baf68531-b18e-4d82-9787-08a0c9381707" containerID="a663910ccd955e667f019c67cab0a2fc9a3eacbe24740ef49391750d6ea388f8" exitCode=0 Feb 19 13:00:36 crc kubenswrapper[4833]: I0219 13:00:36.663527 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5dn4l" event={"ID":"baf68531-b18e-4d82-9787-08a0c9381707","Type":"ContainerDied","Data":"a663910ccd955e667f019c67cab0a2fc9a3eacbe24740ef49391750d6ea388f8"} Feb 19 13:00:37 crc kubenswrapper[4833]: I0219 13:00:37.671920 4833 generic.go:334] "Generic (PLEG): container finished" podID="baf68531-b18e-4d82-9787-08a0c9381707" containerID="df136b1ee090a1bdc512c7cfd6769a734b0e94b05832a465ce4dee2feaf071ac" exitCode=0 Feb 19 13:00:37 crc kubenswrapper[4833]: I0219 13:00:37.671986 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5dn4l" 
event={"ID":"baf68531-b18e-4d82-9787-08a0c9381707","Type":"ContainerDied","Data":"df136b1ee090a1bdc512c7cfd6769a734b0e94b05832a465ce4dee2feaf071ac"} Feb 19 13:00:38 crc kubenswrapper[4833]: I0219 13:00:38.680838 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5dn4l" event={"ID":"baf68531-b18e-4d82-9787-08a0c9381707","Type":"ContainerStarted","Data":"b0f436b9ad060717580205decd42b0db0a0ed8504c709f9128a2cfdd4566bf29"} Feb 19 13:00:39 crc kubenswrapper[4833]: I0219 13:00:39.005022 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-bt8dg"] Feb 19 13:00:39 crc kubenswrapper[4833]: I0219 13:00:39.005938 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-bt8dg" Feb 19 13:00:39 crc kubenswrapper[4833]: I0219 13:00:39.007865 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-pg8zn" Feb 19 13:00:39 crc kubenswrapper[4833]: I0219 13:00:39.008071 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 19 13:00:39 crc kubenswrapper[4833]: I0219 13:00:39.008609 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 19 13:00:39 crc kubenswrapper[4833]: I0219 13:00:39.021162 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bt8dg"] Feb 19 13:00:39 crc kubenswrapper[4833]: I0219 13:00:39.114295 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdnl6\" (UniqueName: \"kubernetes.io/projected/ff1b56c9-520d-42fc-8f48-0180dbde9c5e-kube-api-access-sdnl6\") pod \"openstack-operator-index-bt8dg\" (UID: \"ff1b56c9-520d-42fc-8f48-0180dbde9c5e\") " pod="openstack-operators/openstack-operator-index-bt8dg" Feb 19 13:00:39 crc kubenswrapper[4833]: I0219 13:00:39.216400 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdnl6\" (UniqueName: \"kubernetes.io/projected/ff1b56c9-520d-42fc-8f48-0180dbde9c5e-kube-api-access-sdnl6\") pod \"openstack-operator-index-bt8dg\" (UID: \"ff1b56c9-520d-42fc-8f48-0180dbde9c5e\") " pod="openstack-operators/openstack-operator-index-bt8dg" Feb 19 13:00:39 crc kubenswrapper[4833]: I0219 13:00:39.241928 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdnl6\" (UniqueName: \"kubernetes.io/projected/ff1b56c9-520d-42fc-8f48-0180dbde9c5e-kube-api-access-sdnl6\") pod \"openstack-operator-index-bt8dg\" (UID: \"ff1b56c9-520d-42fc-8f48-0180dbde9c5e\") " pod="openstack-operators/openstack-operator-index-bt8dg" Feb 19 13:00:39 crc kubenswrapper[4833]: I0219 13:00:39.329394 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-bt8dg" Feb 19 13:00:39 crc kubenswrapper[4833]: I0219 13:00:39.811111 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-bt8dg"] Feb 19 13:00:40 crc kubenswrapper[4833]: I0219 13:00:40.703794 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5dn4l" event={"ID":"baf68531-b18e-4d82-9787-08a0c9381707","Type":"ContainerStarted","Data":"56eda3eb6bc7f1cf32ea003e87ae989d422304f37378ee407fb102d3d3031f8e"} Feb 19 13:00:40 crc kubenswrapper[4833]: I0219 13:00:40.703845 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5dn4l" event={"ID":"baf68531-b18e-4d82-9787-08a0c9381707","Type":"ContainerStarted","Data":"124527e726529184342c91f46114bae613d07819f3137be6f717ae74c659263f"} Feb 19 13:00:40 crc kubenswrapper[4833]: I0219 13:00:40.705357 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bt8dg" event={"ID":"ff1b56c9-520d-42fc-8f48-0180dbde9c5e","Type":"ContainerStarted","Data":"f924110ee5ad368c7e188e4ef8f48236890ceec38bd05da4f13e1151c7e5ca41"} Feb 19 13:00:42 crc kubenswrapper[4833]: I0219 13:00:42.187890 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-bt8dg"] Feb 19 13:00:42 crc kubenswrapper[4833]: I0219 13:00:42.809774 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-8gfsk"] Feb 19 13:00:42 crc kubenswrapper[4833]: I0219 13:00:42.810838 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8gfsk"] Feb 19 13:00:42 crc kubenswrapper[4833]: I0219 13:00:42.810918 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8gfsk" Feb 19 13:00:42 crc kubenswrapper[4833]: I0219 13:00:42.866747 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65xzs\" (UniqueName: \"kubernetes.io/projected/8f9a2baf-c1a1-48a6-baa9-e73ad2dcac6e-kube-api-access-65xzs\") pod \"openstack-operator-index-8gfsk\" (UID: \"8f9a2baf-c1a1-48a6-baa9-e73ad2dcac6e\") " pod="openstack-operators/openstack-operator-index-8gfsk" Feb 19 13:00:42 crc kubenswrapper[4833]: I0219 13:00:42.968396 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65xzs\" (UniqueName: \"kubernetes.io/projected/8f9a2baf-c1a1-48a6-baa9-e73ad2dcac6e-kube-api-access-65xzs\") pod \"openstack-operator-index-8gfsk\" (UID: \"8f9a2baf-c1a1-48a6-baa9-e73ad2dcac6e\") " pod="openstack-operators/openstack-operator-index-8gfsk" Feb 19 13:00:42 crc kubenswrapper[4833]: I0219 13:00:42.996028 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65xzs\" (UniqueName: \"kubernetes.io/projected/8f9a2baf-c1a1-48a6-baa9-e73ad2dcac6e-kube-api-access-65xzs\") pod \"openstack-operator-index-8gfsk\" (UID: \"8f9a2baf-c1a1-48a6-baa9-e73ad2dcac6e\") " pod="openstack-operators/openstack-operator-index-8gfsk" Feb 19 13:00:43 crc kubenswrapper[4833]: I0219 13:00:43.146316 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-8gfsk" Feb 19 13:00:43 crc kubenswrapper[4833]: I0219 13:00:43.628349 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8gfsk"] Feb 19 13:00:43 crc kubenswrapper[4833]: W0219 13:00:43.641416 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f9a2baf_c1a1_48a6_baa9_e73ad2dcac6e.slice/crio-a21f27b2a1bc34ddaa804f867fe9331f9bd86ce7e024f7fb268e4e7d88796c30 WatchSource:0}: Error finding container a21f27b2a1bc34ddaa804f867fe9331f9bd86ce7e024f7fb268e4e7d88796c30: Status 404 returned error can't find the container with id a21f27b2a1bc34ddaa804f867fe9331f9bd86ce7e024f7fb268e4e7d88796c30 Feb 19 13:00:43 crc kubenswrapper[4833]: I0219 13:00:43.745264 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5dn4l" event={"ID":"baf68531-b18e-4d82-9787-08a0c9381707","Type":"ContainerStarted","Data":"0afe74b3ba47e05b7e59e16f765dea44ff63650ff492e632e6dcfc6b506b8391"} Feb 19 13:00:43 crc kubenswrapper[4833]: I0219 13:00:43.746959 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8gfsk" event={"ID":"8f9a2baf-c1a1-48a6-baa9-e73ad2dcac6e","Type":"ContainerStarted","Data":"a21f27b2a1bc34ddaa804f867fe9331f9bd86ce7e024f7fb268e4e7d88796c30"} Feb 19 13:00:44 crc kubenswrapper[4833]: I0219 13:00:44.760145 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5dn4l" event={"ID":"baf68531-b18e-4d82-9787-08a0c9381707","Type":"ContainerStarted","Data":"4bc49e88c1d83b4db0f6341a020436f2015b5ea117237cc78b0240461c1b08cc"} Feb 19 13:00:44 crc kubenswrapper[4833]: I0219 13:00:44.762527 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-5dn4l" Feb 19 13:00:44 crc kubenswrapper[4833]: I0219 13:00:44.762561 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5dn4l" event={"ID":"baf68531-b18e-4d82-9787-08a0c9381707","Type":"ContainerStarted","Data":"bb6b416a1becff7213601fa44625e10fa9ba0a83a0ab546be883220d8efe0109"} Feb 19 13:00:44 crc kubenswrapper[4833]: I0219 13:00:44.789443 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-5dn4l" podStartSLOduration=10.796266811 podStartE2EDuration="20.789421938s" podCreationTimestamp="2026-02-19 13:00:24 +0000 UTC" firstStartedPulling="2026-02-19 13:00:25.072439249 +0000 UTC m=+835.467958017" lastFinishedPulling="2026-02-19 13:00:35.065594376 +0000 UTC m=+845.461113144" observedRunningTime="2026-02-19 13:00:44.787275672 +0000 UTC m=+855.182794450" watchObservedRunningTime="2026-02-19 13:00:44.789421938 +0000 UTC m=+855.184940716" Feb 19 13:00:49 crc kubenswrapper[4833]: I0219 13:00:49.682527 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-5dn4l" Feb 19 13:00:49 crc kubenswrapper[4833]: I0219 13:00:49.687239 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-5dn4l" Feb 19 13:00:49 crc kubenswrapper[4833]: I0219 13:00:49.720802 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-5dn4l" Feb 19 13:00:51 crc kubenswrapper[4833]: I0219 13:00:51.822663 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bt8dg" 
event={"ID":"ff1b56c9-520d-42fc-8f48-0180dbde9c5e","Type":"ContainerStarted","Data":"c04a948f51311761c90fae9f29ba8c2d96a629a5d1c565896c57e5d95271b7b8"} Feb 19 13:00:51 crc kubenswrapper[4833]: I0219 13:00:51.822731 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-bt8dg" podUID="ff1b56c9-520d-42fc-8f48-0180dbde9c5e" containerName="registry-server" containerID="cri-o://c04a948f51311761c90fae9f29ba8c2d96a629a5d1c565896c57e5d95271b7b8" gracePeriod=2 Feb 19 13:00:51 crc kubenswrapper[4833]: I0219 13:00:51.825803 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8gfsk" event={"ID":"8f9a2baf-c1a1-48a6-baa9-e73ad2dcac6e","Type":"ContainerStarted","Data":"1f9cfe8706d559ecbb13e14be3edb65045561cba621ec7c15f5fac2c54f44961"} Feb 19 13:00:51 crc kubenswrapper[4833]: I0219 13:00:51.859024 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-bt8dg" podStartSLOduration=3.085055978 podStartE2EDuration="13.85899686s" podCreationTimestamp="2026-02-19 13:00:38 +0000 UTC" firstStartedPulling="2026-02-19 13:00:39.817654693 +0000 UTC m=+850.213173461" lastFinishedPulling="2026-02-19 13:00:50.591595535 +0000 UTC m=+860.987114343" observedRunningTime="2026-02-19 13:00:51.847904923 +0000 UTC m=+862.243423741" watchObservedRunningTime="2026-02-19 13:00:51.85899686 +0000 UTC m=+862.254515638" Feb 19 13:00:51 crc kubenswrapper[4833]: I0219 13:00:51.870657 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-8gfsk" podStartSLOduration=2.913447618 podStartE2EDuration="9.870638971s" podCreationTimestamp="2026-02-19 13:00:42 +0000 UTC" firstStartedPulling="2026-02-19 13:00:43.645070778 +0000 UTC m=+854.040589556" lastFinishedPulling="2026-02-19 13:00:50.602262111 +0000 UTC m=+860.997780909" observedRunningTime="2026-02-19 13:00:51.867486909 +0000 UTC m=+862.263005717" watchObservedRunningTime="2026-02-19 13:00:51.870638971 +0000 UTC m=+862.266157749" Feb 19 13:00:52 crc kubenswrapper[4833]: I0219 13:00:52.621741 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-bt8dg" Feb 19 13:00:52 crc kubenswrapper[4833]: I0219 13:00:52.726732 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdnl6\" (UniqueName: \"kubernetes.io/projected/ff1b56c9-520d-42fc-8f48-0180dbde9c5e-kube-api-access-sdnl6\") pod \"ff1b56c9-520d-42fc-8f48-0180dbde9c5e\" (UID: \"ff1b56c9-520d-42fc-8f48-0180dbde9c5e\") " Feb 19 13:00:52 crc kubenswrapper[4833]: I0219 13:00:52.732540 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff1b56c9-520d-42fc-8f48-0180dbde9c5e-kube-api-access-sdnl6" (OuterVolumeSpecName: "kube-api-access-sdnl6") pod "ff1b56c9-520d-42fc-8f48-0180dbde9c5e" (UID: "ff1b56c9-520d-42fc-8f48-0180dbde9c5e"). InnerVolumeSpecName "kube-api-access-sdnl6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:00:52 crc kubenswrapper[4833]: I0219 13:00:52.828348 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdnl6\" (UniqueName: \"kubernetes.io/projected/ff1b56c9-520d-42fc-8f48-0180dbde9c5e-kube-api-access-sdnl6\") on node \"crc\" DevicePath \"\"" Feb 19 13:00:52 crc kubenswrapper[4833]: I0219 13:00:52.838278 4833 generic.go:334] "Generic (PLEG): container finished" podID="ff1b56c9-520d-42fc-8f48-0180dbde9c5e" containerID="c04a948f51311761c90fae9f29ba8c2d96a629a5d1c565896c57e5d95271b7b8" exitCode=0 Feb 19 13:00:52 crc kubenswrapper[4833]: I0219 13:00:52.838377 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-bt8dg" Feb 19 13:00:52 crc kubenswrapper[4833]: I0219 13:00:52.838439 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bt8dg" event={"ID":"ff1b56c9-520d-42fc-8f48-0180dbde9c5e","Type":"ContainerDied","Data":"c04a948f51311761c90fae9f29ba8c2d96a629a5d1c565896c57e5d95271b7b8"} Feb 19 13:00:52 crc kubenswrapper[4833]: I0219 13:00:52.838489 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-bt8dg" event={"ID":"ff1b56c9-520d-42fc-8f48-0180dbde9c5e","Type":"ContainerDied","Data":"f924110ee5ad368c7e188e4ef8f48236890ceec38bd05da4f13e1151c7e5ca41"} Feb 19 13:00:52 crc kubenswrapper[4833]: I0219 13:00:52.838548 4833 scope.go:117] "RemoveContainer" containerID="c04a948f51311761c90fae9f29ba8c2d96a629a5d1c565896c57e5d95271b7b8" Feb 19 13:00:52 crc kubenswrapper[4833]: I0219 13:00:52.872878 4833 scope.go:117] "RemoveContainer" containerID="c04a948f51311761c90fae9f29ba8c2d96a629a5d1c565896c57e5d95271b7b8" Feb 19 13:00:52 crc kubenswrapper[4833]: E0219 13:00:52.877959 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c04a948f51311761c90fae9f29ba8c2d96a629a5d1c565896c57e5d95271b7b8\": container with ID starting with c04a948f51311761c90fae9f29ba8c2d96a629a5d1c565896c57e5d95271b7b8 not found: ID does not exist" containerID="c04a948f51311761c90fae9f29ba8c2d96a629a5d1c565896c57e5d95271b7b8" Feb 19 13:00:52 crc kubenswrapper[4833]: I0219 13:00:52.878021 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c04a948f51311761c90fae9f29ba8c2d96a629a5d1c565896c57e5d95271b7b8"} err="failed to get container status \"c04a948f51311761c90fae9f29ba8c2d96a629a5d1c565896c57e5d95271b7b8\": rpc error: code = NotFound desc = could not find container \"c04a948f51311761c90fae9f29ba8c2d96a629a5d1c565896c57e5d95271b7b8\": container with ID starting with c04a948f51311761c90fae9f29ba8c2d96a629a5d1c565896c57e5d95271b7b8 not found: ID does not exist" Feb 19 13:00:52 crc kubenswrapper[4833]: I0219 13:00:52.895050 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-bt8dg"] Feb 19 13:00:52 crc kubenswrapper[4833]: I0219 13:00:52.903037 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-bt8dg"] Feb 19 13:00:53 crc kubenswrapper[4833]: I0219 13:00:53.147733 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-8gfsk" Feb 19 13:00:53 crc kubenswrapper[4833]: I0219 13:00:53.148034 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack-operators/openstack-operator-index-8gfsk" Feb 19 13:00:53 crc kubenswrapper[4833]: I0219 13:00:53.191332 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-8gfsk" Feb 19 13:00:54 crc kubenswrapper[4833]: I0219 13:00:54.324530 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff1b56c9-520d-42fc-8f48-0180dbde9c5e" path="/var/lib/kubelet/pods/ff1b56c9-520d-42fc-8f48-0180dbde9c5e/volumes" Feb 19 13:00:54 crc kubenswrapper[4833]: I0219 13:00:54.703805 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-69h4r" Feb 19 13:01:03 crc kubenswrapper[4833]: I0219 13:01:03.190387 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-8gfsk" Feb 19 13:01:06 crc kubenswrapper[4833]: I0219 13:01:06.508917 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k8mrp"] Feb 19 13:01:06 crc kubenswrapper[4833]: E0219 13:01:06.509459 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff1b56c9-520d-42fc-8f48-0180dbde9c5e" containerName="registry-server" Feb 19 13:01:06 crc kubenswrapper[4833]: I0219 13:01:06.509471 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff1b56c9-520d-42fc-8f48-0180dbde9c5e" containerName="registry-server" Feb 19 13:01:06 crc kubenswrapper[4833]: I0219 13:01:06.509618 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff1b56c9-520d-42fc-8f48-0180dbde9c5e" containerName="registry-server" Feb 19 13:01:06 crc kubenswrapper[4833]: I0219 13:01:06.510419 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k8mrp" Feb 19 13:01:06 crc kubenswrapper[4833]: I0219 13:01:06.526654 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k8mrp"] Feb 19 13:01:06 crc kubenswrapper[4833]: I0219 13:01:06.626661 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c60275a2-65bf-429d-9ef4-93213e37e890-utilities\") pod \"certified-operators-k8mrp\" (UID: \"c60275a2-65bf-429d-9ef4-93213e37e890\") " pod="openshift-marketplace/certified-operators-k8mrp" Feb 19 13:01:06 crc kubenswrapper[4833]: I0219 13:01:06.626829 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c60275a2-65bf-429d-9ef4-93213e37e890-catalog-content\") pod \"certified-operators-k8mrp\" (UID: \"c60275a2-65bf-429d-9ef4-93213e37e890\") " pod="openshift-marketplace/certified-operators-k8mrp" Feb 19 13:01:06 crc kubenswrapper[4833]: I0219 13:01:06.626908 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7525\" (UniqueName: \"kubernetes.io/projected/c60275a2-65bf-429d-9ef4-93213e37e890-kube-api-access-p7525\") pod \"certified-operators-k8mrp\" (UID: \"c60275a2-65bf-429d-9ef4-93213e37e890\") " pod="openshift-marketplace/certified-operators-k8mrp" Feb 19 13:01:06 crc kubenswrapper[4833]: I0219 13:01:06.728590 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c60275a2-65bf-429d-9ef4-93213e37e890-utilities\") pod 
\"certified-operators-k8mrp\" (UID: \"c60275a2-65bf-429d-9ef4-93213e37e890\") " pod="openshift-marketplace/certified-operators-k8mrp" Feb 19 13:01:06 crc kubenswrapper[4833]: I0219 13:01:06.728669 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c60275a2-65bf-429d-9ef4-93213e37e890-catalog-content\") pod \"certified-operators-k8mrp\" (UID: \"c60275a2-65bf-429d-9ef4-93213e37e890\") " pod="openshift-marketplace/certified-operators-k8mrp" Feb 19 13:01:06 crc kubenswrapper[4833]: I0219 13:01:06.728707 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7525\" (UniqueName: \"kubernetes.io/projected/c60275a2-65bf-429d-9ef4-93213e37e890-kube-api-access-p7525\") pod \"certified-operators-k8mrp\" (UID: \"c60275a2-65bf-429d-9ef4-93213e37e890\") " pod="openshift-marketplace/certified-operators-k8mrp" Feb 19 13:01:06 crc kubenswrapper[4833]: I0219 13:01:06.729180 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c60275a2-65bf-429d-9ef4-93213e37e890-utilities\") pod \"certified-operators-k8mrp\" (UID: \"c60275a2-65bf-429d-9ef4-93213e37e890\") " pod="openshift-marketplace/certified-operators-k8mrp" Feb 19 13:01:06 crc kubenswrapper[4833]: I0219 13:01:06.729418 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c60275a2-65bf-429d-9ef4-93213e37e890-catalog-content\") pod \"certified-operators-k8mrp\" (UID: \"c60275a2-65bf-429d-9ef4-93213e37e890\") " pod="openshift-marketplace/certified-operators-k8mrp" Feb 19 13:01:06 crc kubenswrapper[4833]: I0219 13:01:06.753271 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7525\" (UniqueName: \"kubernetes.io/projected/c60275a2-65bf-429d-9ef4-93213e37e890-kube-api-access-p7525\") pod \"certified-operators-k8mrp\" (UID: \"c60275a2-65bf-429d-9ef4-93213e37e890\") " pod="openshift-marketplace/certified-operators-k8mrp" Feb 19 13:01:06 crc kubenswrapper[4833]: I0219 13:01:06.838474 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k8mrp" Feb 19 13:01:07 crc kubenswrapper[4833]: I0219 13:01:07.121524 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k8mrp"] Feb 19 13:01:07 crc kubenswrapper[4833]: I0219 13:01:07.971114 4833 generic.go:334] "Generic (PLEG): container finished" podID="c60275a2-65bf-429d-9ef4-93213e37e890" containerID="ea3d67a28295ea038a057b85f3526618a241f3bc082ce24816ffbc1f846a1a87" exitCode=0 Feb 19 13:01:07 crc kubenswrapper[4833]: I0219 13:01:07.971194 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8mrp" event={"ID":"c60275a2-65bf-429d-9ef4-93213e37e890","Type":"ContainerDied","Data":"ea3d67a28295ea038a057b85f3526618a241f3bc082ce24816ffbc1f846a1a87"} Feb 19 13:01:07 crc kubenswrapper[4833]: I0219 13:01:07.971254 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8mrp" event={"ID":"c60275a2-65bf-429d-9ef4-93213e37e890","Type":"ContainerStarted","Data":"13fd56376e519f8e56e6b67c13aa5f840e50596cbd74ed47f04f66be0e688877"} Feb 19 13:01:08 crc kubenswrapper[4833]: I0219 13:01:08.340583 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst"] Feb 19 13:01:08 crc kubenswrapper[4833]: I0219 13:01:08.342152 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst" Feb 19 13:01:08 crc kubenswrapper[4833]: I0219 13:01:08.345388 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-7vl7z" Feb 19 13:01:08 crc kubenswrapper[4833]: I0219 13:01:08.355973 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst"] Feb 19 13:01:08 crc kubenswrapper[4833]: I0219 13:01:08.464846 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/095e674e-7762-4748-896c-1e0b2dd9fbfc-bundle\") pod \"05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst\" (UID: \"095e674e-7762-4748-896c-1e0b2dd9fbfc\") " pod="openstack-operators/05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst" Feb 19 13:01:08 crc kubenswrapper[4833]: I0219 13:01:08.464884 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-288xv\" (UniqueName: \"kubernetes.io/projected/095e674e-7762-4748-896c-1e0b2dd9fbfc-kube-api-access-288xv\") pod \"05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst\" (UID: \"095e674e-7762-4748-896c-1e0b2dd9fbfc\") " pod="openstack-operators/05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst" Feb 19 13:01:08 crc kubenswrapper[4833]: I0219 13:01:08.464958 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/095e674e-7762-4748-896c-1e0b2dd9fbfc-util\") pod \"05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst\" (UID: \"095e674e-7762-4748-896c-1e0b2dd9fbfc\") " pod="openstack-operators/05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst" Feb 19 13:01:08 crc kubenswrapper[4833]: I0219 13:01:08.566464 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/095e674e-7762-4748-896c-1e0b2dd9fbfc-util\") pod \"05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst\" (UID: \"095e674e-7762-4748-896c-1e0b2dd9fbfc\") " pod="openstack-operators/05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst" Feb 19 13:01:08 crc kubenswrapper[4833]: I0219 13:01:08.566576 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/095e674e-7762-4748-896c-1e0b2dd9fbfc-bundle\") pod \"05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst\" (UID: \"095e674e-7762-4748-896c-1e0b2dd9fbfc\") " pod="openstack-operators/05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst" Feb 19 13:01:08 crc kubenswrapper[4833]: I0219 13:01:08.566622 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-288xv\" (UniqueName: \"kubernetes.io/projected/095e674e-7762-4748-896c-1e0b2dd9fbfc-kube-api-access-288xv\") pod \"05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst\" (UID: \"095e674e-7762-4748-896c-1e0b2dd9fbfc\") " pod="openstack-operators/05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst" Feb 19 13:01:08 crc kubenswrapper[4833]: I0219 13:01:08.567518 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/095e674e-7762-4748-896c-1e0b2dd9fbfc-util\") pod \"05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst\" (UID: \"095e674e-7762-4748-896c-1e0b2dd9fbfc\") " pod="openstack-operators/05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst" Feb 19 13:01:08 crc kubenswrapper[4833]: I0219 13:01:08.568224 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/095e674e-7762-4748-896c-1e0b2dd9fbfc-bundle\") pod \"05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst\" (UID: \"095e674e-7762-4748-896c-1e0b2dd9fbfc\") " pod="openstack-operators/05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst" Feb 19 13:01:08 crc kubenswrapper[4833]: I0219 13:01:08.588457 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-288xv\" (UniqueName: \"kubernetes.io/projected/095e674e-7762-4748-896c-1e0b2dd9fbfc-kube-api-access-288xv\") pod \"05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst\" (UID: \"095e674e-7762-4748-896c-1e0b2dd9fbfc\") " pod="openstack-operators/05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst" Feb 19 13:01:08 crc kubenswrapper[4833]: I0219 13:01:08.664138 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst" Feb 19 13:01:08 crc kubenswrapper[4833]: I0219 13:01:08.964742 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst"] Feb 19 13:01:08 crc kubenswrapper[4833]: I0219 13:01:08.981468 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8mrp" event={"ID":"c60275a2-65bf-429d-9ef4-93213e37e890","Type":"ContainerStarted","Data":"af0d53161f4b44b6fed4d3d9a7cdc519ce0cc97e21942dac918b553e09692a21"} Feb 19 13:01:09 crc kubenswrapper[4833]: I0219 13:01:09.989238 4833 generic.go:334] "Generic (PLEG): container finished" podID="c60275a2-65bf-429d-9ef4-93213e37e890" containerID="af0d53161f4b44b6fed4d3d9a7cdc519ce0cc97e21942dac918b553e09692a21" exitCode=0 Feb 19 13:01:09 crc kubenswrapper[4833]: I0219 13:01:09.989356 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8mrp" event={"ID":"c60275a2-65bf-429d-9ef4-93213e37e890","Type":"ContainerDied","Data":"af0d53161f4b44b6fed4d3d9a7cdc519ce0cc97e21942dac918b553e09692a21"} Feb 19 13:01:09 crc kubenswrapper[4833]: I0219 13:01:09.992268 4833 generic.go:334] "Generic (PLEG): container finished" podID="095e674e-7762-4748-896c-1e0b2dd9fbfc" containerID="8bdac5e4b98687956856605df2e1a54b81aa6612d874ff5be3d9f225cbc612f3" exitCode=0 Feb 19 13:01:09 crc kubenswrapper[4833]: I0219 13:01:09.992307 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst" event={"ID":"095e674e-7762-4748-896c-1e0b2dd9fbfc","Type":"ContainerDied","Data":"8bdac5e4b98687956856605df2e1a54b81aa6612d874ff5be3d9f225cbc612f3"} Feb 19 13:01:09 crc kubenswrapper[4833]: I0219 13:01:09.992329 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst" event={"ID":"095e674e-7762-4748-896c-1e0b2dd9fbfc","Type":"ContainerStarted","Data":"3c40d30df5313bc2c3aa86e39dadba4fcb279c84c689b1d067076a71268a2570"} Feb 19 13:01:11 crc kubenswrapper[4833]: I0219 13:01:10.999846 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8mrp" event={"ID":"c60275a2-65bf-429d-9ef4-93213e37e890","Type":"ContainerStarted","Data":"d34cb973f7672f284bdad5d09cb53af7617361c6f578f19c21433dba664b3b47"} Feb 19 13:01:11 crc kubenswrapper[4833]: I0219 13:01:11.001613 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst" event={"ID":"095e674e-7762-4748-896c-1e0b2dd9fbfc","Type":"ContainerStarted","Data":"ea75e57a6f4215ad8b2c282d5cdac858733de148fd3e5b04e6d268b0f8e97a33"} Feb 19 13:01:12 crc kubenswrapper[4833]: I0219 13:01:12.012135 4833 generic.go:334] "Generic (PLEG): container finished" podID="095e674e-7762-4748-896c-1e0b2dd9fbfc" containerID="ea75e57a6f4215ad8b2c282d5cdac858733de148fd3e5b04e6d268b0f8e97a33" exitCode=0 Feb 19 13:01:12 crc kubenswrapper[4833]: I0219 13:01:12.012218 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst" event={"ID":"095e674e-7762-4748-896c-1e0b2dd9fbfc","Type":"ContainerDied","Data":"ea75e57a6f4215ad8b2c282d5cdac858733de148fd3e5b04e6d268b0f8e97a33"} Feb 19 13:01:12 crc kubenswrapper[4833]: I0219 
13:01:12.046369 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k8mrp" podStartSLOduration=3.258445291 podStartE2EDuration="6.046344321s" podCreationTimestamp="2026-02-19 13:01:06 +0000 UTC" firstStartedPulling="2026-02-19 13:01:07.972921398 +0000 UTC m=+878.368440186" lastFinishedPulling="2026-02-19 13:01:10.760820448 +0000 UTC m=+881.156339216" observedRunningTime="2026-02-19 13:01:12.041092125 +0000 UTC m=+882.436610893" watchObservedRunningTime="2026-02-19 13:01:12.046344321 +0000 UTC m=+882.441863099" Feb 19 13:01:12 crc kubenswrapper[4833]: E0219 13:01:12.452370 4833 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod095e674e_7762_4748_896c_1e0b2dd9fbfc.slice/crio-conmon-10708983af96106d85bc6850eaf967d24fc2cc3576391604bc615b44534d4a99.scope\": RecentStats: unable to find data in memory cache]" Feb 19 13:01:13 crc kubenswrapper[4833]: I0219 13:01:13.024682 4833 generic.go:334] "Generic (PLEG): container finished" podID="095e674e-7762-4748-896c-1e0b2dd9fbfc" containerID="10708983af96106d85bc6850eaf967d24fc2cc3576391604bc615b44534d4a99" exitCode=0 Feb 19 13:01:13 crc kubenswrapper[4833]: I0219 13:01:13.024742 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst" event={"ID":"095e674e-7762-4748-896c-1e0b2dd9fbfc","Type":"ContainerDied","Data":"10708983af96106d85bc6850eaf967d24fc2cc3576391604bc615b44534d4a99"} Feb 19 13:01:14 crc kubenswrapper[4833]: I0219 13:01:14.352288 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst" Feb 19 13:01:14 crc kubenswrapper[4833]: I0219 13:01:14.445905 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/095e674e-7762-4748-896c-1e0b2dd9fbfc-util\") pod \"095e674e-7762-4748-896c-1e0b2dd9fbfc\" (UID: \"095e674e-7762-4748-896c-1e0b2dd9fbfc\") " Feb 19 13:01:14 crc kubenswrapper[4833]: I0219 13:01:14.448674 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-288xv\" (UniqueName: \"kubernetes.io/projected/095e674e-7762-4748-896c-1e0b2dd9fbfc-kube-api-access-288xv\") pod \"095e674e-7762-4748-896c-1e0b2dd9fbfc\" (UID: \"095e674e-7762-4748-896c-1e0b2dd9fbfc\") " Feb 19 13:01:14 crc kubenswrapper[4833]: I0219 13:01:14.448773 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/095e674e-7762-4748-896c-1e0b2dd9fbfc-bundle\") pod \"095e674e-7762-4748-896c-1e0b2dd9fbfc\" (UID: \"095e674e-7762-4748-896c-1e0b2dd9fbfc\") " Feb 19 13:01:14 crc kubenswrapper[4833]: I0219 13:01:14.452292 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/095e674e-7762-4748-896c-1e0b2dd9fbfc-bundle" (OuterVolumeSpecName: "bundle") pod "095e674e-7762-4748-896c-1e0b2dd9fbfc" (UID: "095e674e-7762-4748-896c-1e0b2dd9fbfc"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:01:14 crc kubenswrapper[4833]: I0219 13:01:14.457327 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/095e674e-7762-4748-896c-1e0b2dd9fbfc-kube-api-access-288xv" (OuterVolumeSpecName: "kube-api-access-288xv") pod "095e674e-7762-4748-896c-1e0b2dd9fbfc" (UID: "095e674e-7762-4748-896c-1e0b2dd9fbfc"). InnerVolumeSpecName "kube-api-access-288xv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:01:14 crc kubenswrapper[4833]: I0219 13:01:14.467077 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/095e674e-7762-4748-896c-1e0b2dd9fbfc-util" (OuterVolumeSpecName: "util") pod "095e674e-7762-4748-896c-1e0b2dd9fbfc" (UID: "095e674e-7762-4748-896c-1e0b2dd9fbfc"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:01:14 crc kubenswrapper[4833]: I0219 13:01:14.550897 4833 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/095e674e-7762-4748-896c-1e0b2dd9fbfc-util\") on node \"crc\" DevicePath \"\"" Feb 19 13:01:14 crc kubenswrapper[4833]: I0219 13:01:14.550943 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-288xv\" (UniqueName: \"kubernetes.io/projected/095e674e-7762-4748-896c-1e0b2dd9fbfc-kube-api-access-288xv\") on node \"crc\" DevicePath \"\"" Feb 19 13:01:14 crc kubenswrapper[4833]: I0219 13:01:14.550955 4833 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/095e674e-7762-4748-896c-1e0b2dd9fbfc-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:01:15 crc kubenswrapper[4833]: I0219 13:01:15.045580 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst" event={"ID":"095e674e-7762-4748-896c-1e0b2dd9fbfc","Type":"ContainerDied","Data":"3c40d30df5313bc2c3aa86e39dadba4fcb279c84c689b1d067076a71268a2570"} Feb 19 13:01:15 crc kubenswrapper[4833]: I0219 13:01:15.047857 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c40d30df5313bc2c3aa86e39dadba4fcb279c84c689b1d067076a71268a2570" Feb 19 13:01:15 crc kubenswrapper[4833]: I0219 13:01:15.047959 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst" Feb 19 13:01:16 crc kubenswrapper[4833]: I0219 13:01:16.839298 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k8mrp" Feb 19 13:01:16 crc kubenswrapper[4833]: I0219 13:01:16.839944 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k8mrp" Feb 19 13:01:16 crc kubenswrapper[4833]: I0219 13:01:16.884483 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k8mrp" Feb 19 13:01:17 crc kubenswrapper[4833]: I0219 13:01:17.116194 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k8mrp" Feb 19 13:01:18 crc kubenswrapper[4833]: I0219 13:01:18.116100 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-568f98c69-t2vv5"] Feb 19 13:01:18 crc kubenswrapper[4833]: E0219 13:01:18.116351 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="095e674e-7762-4748-896c-1e0b2dd9fbfc" containerName="pull" Feb 19 13:01:18 crc kubenswrapper[4833]: I0219 13:01:18.116362 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="095e674e-7762-4748-896c-1e0b2dd9fbfc" containerName="pull" Feb 19 13:01:18 crc kubenswrapper[4833]: E0219 13:01:18.116373 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="095e674e-7762-4748-896c-1e0b2dd9fbfc" containerName="util" Feb 19 13:01:18 crc kubenswrapper[4833]: I0219 13:01:18.116379 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="095e674e-7762-4748-896c-1e0b2dd9fbfc" containerName="util" Feb 19 13:01:18 crc kubenswrapper[4833]: E0219 13:01:18.116390 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="095e674e-7762-4748-896c-1e0b2dd9fbfc" containerName="extract" Feb 19 13:01:18 crc kubenswrapper[4833]: I0219 13:01:18.116397 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="095e674e-7762-4748-896c-1e0b2dd9fbfc" containerName="extract" Feb 19 13:01:18 crc kubenswrapper[4833]: I0219 13:01:18.116518 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="095e674e-7762-4748-896c-1e0b2dd9fbfc" containerName="extract" Feb 19 13:01:18 crc kubenswrapper[4833]: I0219 13:01:18.116883 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-568f98c69-t2vv5" Feb 19 13:01:18 crc kubenswrapper[4833]: I0219 13:01:18.119071 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-lv9fn" Feb 19 13:01:18 crc kubenswrapper[4833]: I0219 13:01:18.148180 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-568f98c69-t2vv5"] Feb 19 13:01:18 crc kubenswrapper[4833]: I0219 13:01:18.203388 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh5wg\" (UniqueName: \"kubernetes.io/projected/59087e2d-5038-44cd-ab4d-1d1340e51c75-kube-api-access-kh5wg\") pod \"openstack-operator-controller-init-568f98c69-t2vv5\" (UID: \"59087e2d-5038-44cd-ab4d-1d1340e51c75\") " pod="openstack-operators/openstack-operator-controller-init-568f98c69-t2vv5" Feb 19 13:01:18 crc kubenswrapper[4833]: I0219 13:01:18.304472 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh5wg\" (UniqueName: \"kubernetes.io/projected/59087e2d-5038-44cd-ab4d-1d1340e51c75-kube-api-access-kh5wg\") pod \"openstack-operator-controller-init-568f98c69-t2vv5\" (UID: \"59087e2d-5038-44cd-ab4d-1d1340e51c75\") " pod="openstack-operators/openstack-operator-controller-init-568f98c69-t2vv5" Feb 19 13:01:18 crc kubenswrapper[4833]: I0219 13:01:18.327383 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh5wg\" (UniqueName: \"kubernetes.io/projected/59087e2d-5038-44cd-ab4d-1d1340e51c75-kube-api-access-kh5wg\") pod \"openstack-operator-controller-init-568f98c69-t2vv5\" (UID: \"59087e2d-5038-44cd-ab4d-1d1340e51c75\") " pod="openstack-operators/openstack-operator-controller-init-568f98c69-t2vv5" Feb 19 13:01:18 crc kubenswrapper[4833]: I0219 13:01:18.436000 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-568f98c69-t2vv5" Feb 19 13:01:18 crc kubenswrapper[4833]: I0219 13:01:18.948611 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-568f98c69-t2vv5"] Feb 19 13:01:19 crc kubenswrapper[4833]: I0219 13:01:19.071213 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-568f98c69-t2vv5" event={"ID":"59087e2d-5038-44cd-ab4d-1d1340e51c75","Type":"ContainerStarted","Data":"e9d5dbf21ded43fa4560c3a535e10f4bad4b5f1b584c63670d8330d999d928e3"} Feb 19 13:01:19 crc kubenswrapper[4833]: I0219 13:01:19.281216 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k8mrp"] Feb 19 13:01:19 crc kubenswrapper[4833]: I0219 13:01:19.281547 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k8mrp" podUID="c60275a2-65bf-429d-9ef4-93213e37e890" containerName="registry-server" containerID="cri-o://d34cb973f7672f284bdad5d09cb53af7617361c6f578f19c21433dba664b3b47" gracePeriod=2 Feb 19 13:01:24 crc kubenswrapper[4833]: I0219 13:01:24.106415 4833 generic.go:334] "Generic (PLEG): container finished" podID="c60275a2-65bf-429d-9ef4-93213e37e890" containerID="d34cb973f7672f284bdad5d09cb53af7617361c6f578f19c21433dba664b3b47" exitCode=0 Feb 19 13:01:24 crc kubenswrapper[4833]: I0219 13:01:24.106522 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8mrp" event={"ID":"c60275a2-65bf-429d-9ef4-93213e37e890","Type":"ContainerDied","Data":"d34cb973f7672f284bdad5d09cb53af7617361c6f578f19c21433dba664b3b47"} Feb 19 13:01:25 crc kubenswrapper[4833]: I0219 13:01:25.192740 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k8mrp" Feb 19 13:01:25 crc kubenswrapper[4833]: I0219 13:01:25.295904 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c60275a2-65bf-429d-9ef4-93213e37e890-catalog-content\") pod \"c60275a2-65bf-429d-9ef4-93213e37e890\" (UID: \"c60275a2-65bf-429d-9ef4-93213e37e890\") " Feb 19 13:01:25 crc kubenswrapper[4833]: I0219 13:01:25.295974 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c60275a2-65bf-429d-9ef4-93213e37e890-utilities\") pod \"c60275a2-65bf-429d-9ef4-93213e37e890\" (UID: \"c60275a2-65bf-429d-9ef4-93213e37e890\") " Feb 19 13:01:25 crc kubenswrapper[4833]: I0219 13:01:25.296020 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7525\" (UniqueName: \"kubernetes.io/projected/c60275a2-65bf-429d-9ef4-93213e37e890-kube-api-access-p7525\") pod \"c60275a2-65bf-429d-9ef4-93213e37e890\" (UID: \"c60275a2-65bf-429d-9ef4-93213e37e890\") " Feb 19 13:01:25 crc kubenswrapper[4833]: I0219 13:01:25.297006 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c60275a2-65bf-429d-9ef4-93213e37e890-utilities" (OuterVolumeSpecName: "utilities") pod "c60275a2-65bf-429d-9ef4-93213e37e890" (UID: "c60275a2-65bf-429d-9ef4-93213e37e890"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:01:25 crc kubenswrapper[4833]: I0219 13:01:25.301741 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c60275a2-65bf-429d-9ef4-93213e37e890-kube-api-access-p7525" (OuterVolumeSpecName: "kube-api-access-p7525") pod "c60275a2-65bf-429d-9ef4-93213e37e890" (UID: "c60275a2-65bf-429d-9ef4-93213e37e890"). InnerVolumeSpecName "kube-api-access-p7525". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:01:25 crc kubenswrapper[4833]: I0219 13:01:25.397139 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c60275a2-65bf-429d-9ef4-93213e37e890-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:01:25 crc kubenswrapper[4833]: I0219 13:01:25.397179 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7525\" (UniqueName: \"kubernetes.io/projected/c60275a2-65bf-429d-9ef4-93213e37e890-kube-api-access-p7525\") on node \"crc\" DevicePath \"\"" Feb 19 13:01:25 crc kubenswrapper[4833]: I0219 13:01:25.645770 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c60275a2-65bf-429d-9ef4-93213e37e890-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c60275a2-65bf-429d-9ef4-93213e37e890" (UID: "c60275a2-65bf-429d-9ef4-93213e37e890"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:01:25 crc kubenswrapper[4833]: I0219 13:01:25.701526 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c60275a2-65bf-429d-9ef4-93213e37e890-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:01:26 crc kubenswrapper[4833]: I0219 13:01:26.125754 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k8mrp" event={"ID":"c60275a2-65bf-429d-9ef4-93213e37e890","Type":"ContainerDied","Data":"13fd56376e519f8e56e6b67c13aa5f840e50596cbd74ed47f04f66be0e688877"} Feb 19 13:01:26 crc kubenswrapper[4833]: I0219 13:01:26.125843 4833 scope.go:117] "RemoveContainer" containerID="d34cb973f7672f284bdad5d09cb53af7617361c6f578f19c21433dba664b3b47" Feb 19 13:01:26 crc kubenswrapper[4833]: I0219 13:01:26.125847 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k8mrp" Feb 19 13:01:26 crc kubenswrapper[4833]: I0219 13:01:26.163581 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k8mrp"] Feb 19 13:01:26 crc kubenswrapper[4833]: I0219 13:01:26.164711 4833 scope.go:117] "RemoveContainer" containerID="af0d53161f4b44b6fed4d3d9a7cdc519ce0cc97e21942dac918b553e09692a21" Feb 19 13:01:26 crc kubenswrapper[4833]: I0219 13:01:26.169385 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k8mrp"] Feb 19 13:01:26 crc kubenswrapper[4833]: I0219 13:01:26.196923 4833 scope.go:117] "RemoveContainer" containerID="ea3d67a28295ea038a057b85f3526618a241f3bc082ce24816ffbc1f846a1a87" Feb 19 13:01:26 crc kubenswrapper[4833]: I0219 13:01:26.324795 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c60275a2-65bf-429d-9ef4-93213e37e890" path="/var/lib/kubelet/pods/c60275a2-65bf-429d-9ef4-93213e37e890/volumes" Feb 19 13:01:31 crc kubenswrapper[4833]: I0219 13:01:31.156664 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-568f98c69-t2vv5" event={"ID":"59087e2d-5038-44cd-ab4d-1d1340e51c75","Type":"ContainerStarted","Data":"c728f7e51b46e8e933b8040776708d020c6a7f6d0caddd061cb06c727b31550c"} Feb 19 13:01:31 crc kubenswrapper[4833]: I0219 13:01:31.157420 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-568f98c69-t2vv5" Feb 19 13:01:31 crc kubenswrapper[4833]: I0219 13:01:31.190601 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-568f98c69-t2vv5" podStartSLOduration=1.543124171 podStartE2EDuration="13.190579432s" podCreationTimestamp="2026-02-19 13:01:18 +0000 UTC" firstStartedPulling="2026-02-19 13:01:18.956666251 +0000 UTC m=+889.352185019" lastFinishedPulling="2026-02-19 13:01:30.604121502 +0000 UTC m=+900.999640280" observedRunningTime="2026-02-19 13:01:31.186951468 +0000 UTC m=+901.582470246" watchObservedRunningTime="2026-02-19 13:01:31.190579432 +0000 UTC m=+901.586098210" Feb 19 13:01:38 crc kubenswrapper[4833]: I0219 13:01:38.440843 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-568f98c69-t2vv5" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.528603 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-hn9cd"] Feb 19 13:02:01 crc kubenswrapper[4833]: E0219 13:02:01.529485 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c60275a2-65bf-429d-9ef4-93213e37e890" containerName="registry-server" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.529526 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c60275a2-65bf-429d-9ef4-93213e37e890" containerName="registry-server" Feb 19 13:02:01 crc kubenswrapper[4833]: E0219 13:02:01.529542 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c60275a2-65bf-429d-9ef4-93213e37e890" containerName="extract-utilities" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.529550 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c60275a2-65bf-429d-9ef4-93213e37e890" containerName="extract-utilities" Feb 19 13:02:01 crc kubenswrapper[4833]: E0219 13:02:01.529568 4833 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c60275a2-65bf-429d-9ef4-93213e37e890" containerName="extract-content" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.529576 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c60275a2-65bf-429d-9ef4-93213e37e890" containerName="extract-content" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.529721 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c60275a2-65bf-429d-9ef4-93213e37e890" containerName="registry-server" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.530229 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-hn9cd" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.532555 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-sv8pn" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.536250 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-g59gc"] Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.537737 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-g59gc" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.539354 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-ckwjv" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.544591 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-g59gc"] Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.549574 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-hn9cd"] Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.566328 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-ztqm7"] Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.567436 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ztqm7" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.569483 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-mfpmh" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.599133 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-ztqm7"] Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.619489 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-hq98q"] Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.629208 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hq98q" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.635775 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-qz2td" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.692617 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-wthd9"] Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.694377 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-wthd9" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.699656 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-hq98q"] Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.704774 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-wthd9"] Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.705846 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-tmtlh" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.717144 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-ng9mx"] Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.718360 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-ng9mx" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.720216 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w95dm\" (UniqueName: \"kubernetes.io/projected/c960bafe-e1ce-4635-a849-758a84db3b0e-kube-api-access-w95dm\") pod \"glance-operator-controller-manager-77987464f4-hq98q\" (UID: \"c960bafe-e1ce-4635-a849-758a84db3b0e\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-hq98q" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.720268 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm44b\" (UniqueName: \"kubernetes.io/projected/dd33e5e9-4983-4954-966e-a693cc5c299b-kube-api-access-gm44b\") pod \"designate-operator-controller-manager-6d8bf5c495-ztqm7\" (UID: \"dd33e5e9-4983-4954-966e-a693cc5c299b\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ztqm7" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.720322 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79pm8\" (UniqueName: \"kubernetes.io/projected/1ba8dd89-0865-4766-b216-b906d4d6f77a-kube-api-access-79pm8\") pod \"cinder-operator-controller-manager-5d946d989d-hn9cd\" (UID: \"1ba8dd89-0865-4766-b216-b906d4d6f77a\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-hn9cd" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.720353 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jmnd\" (UniqueName: \"kubernetes.io/projected/4b94f9da-5e45-4428-a709-24574552d77e-kube-api-access-6jmnd\") pod \"barbican-operator-controller-manager-868647ff47-g59gc\" (UID: 
\"4b94f9da-5e45-4428-a709-24574552d77e\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-g59gc" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.720744 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-r2zjf" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.728146 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-cvzzp"] Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.731302 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-cvzzp" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.738512 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-qnltf" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.739728 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.744149 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-cvzzp"] Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.753767 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-ng9mx"] Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.759805 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-7d2vx"] Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.761035 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-7d2vx" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.765728 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-vcj77" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.776162 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-8wsws"] Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.777326 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8wsws" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.779075 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-gnvjd" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.784407 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-7d2vx"] Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.794761 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-8wsws"] Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.801897 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-hprlt"] Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.803773 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-hprlt" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.808826 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-8tzsn" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.815611 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-7n7vf"] Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.816782 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-7n7vf" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.819124 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-dcj2w" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.823253 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-njc9d"] Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.824319 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-njc9d" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.831868 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-fdwh4" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.833016 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ab70788d-b168-497b-bea0-4847ee80ce73-cert\") pod \"infra-operator-controller-manager-79d975b745-cvzzp\" (UID: \"ab70788d-b168-497b-bea0-4847ee80ce73\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cvzzp" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.833196 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w95dm\" (UniqueName: \"kubernetes.io/projected/c960bafe-e1ce-4635-a849-758a84db3b0e-kube-api-access-w95dm\") pod \"glance-operator-controller-manager-77987464f4-hq98q\" (UID: \"c960bafe-e1ce-4635-a849-758a84db3b0e\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-hq98q" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.833333 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gm44b\" (UniqueName: \"kubernetes.io/projected/dd33e5e9-4983-4954-966e-a693cc5c299b-kube-api-access-gm44b\") pod \"designate-operator-controller-manager-6d8bf5c495-ztqm7\" (UID: \"dd33e5e9-4983-4954-966e-a693cc5c299b\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ztqm7" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.833490 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79pm8\" (UniqueName: \"kubernetes.io/projected/1ba8dd89-0865-4766-b216-b906d4d6f77a-kube-api-access-79pm8\") pod \"cinder-operator-controller-manager-5d946d989d-hn9cd\" (UID: \"1ba8dd89-0865-4766-b216-b906d4d6f77a\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-hn9cd" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.833601 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qgxb\" 
(UniqueName: \"kubernetes.io/projected/0168dd3a-5296-440d-8b46-d858da1cfeb6-kube-api-access-5qgxb\") pod \"horizon-operator-controller-manager-5b9b8895d5-ng9mx\" (UID: \"0168dd3a-5296-440d-8b46-d858da1cfeb6\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-ng9mx" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.833681 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh5cc\" (UniqueName: \"kubernetes.io/projected/ab70788d-b168-497b-bea0-4847ee80ce73-kube-api-access-hh5cc\") pod \"infra-operator-controller-manager-79d975b745-cvzzp\" (UID: \"ab70788d-b168-497b-bea0-4847ee80ce73\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cvzzp" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.833877 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8qck\" (UniqueName: \"kubernetes.io/projected/70feab77-0665-499a-b6e2-b35b95384ab7-kube-api-access-l8qck\") pod \"heat-operator-controller-manager-69f49c598c-wthd9\" (UID: \"70feab77-0665-499a-b6e2-b35b95384ab7\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-wthd9" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.834371 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jmnd\" (UniqueName: \"kubernetes.io/projected/4b94f9da-5e45-4428-a709-24574552d77e-kube-api-access-6jmnd\") pod \"barbican-operator-controller-manager-868647ff47-g59gc\" (UID: \"4b94f9da-5e45-4428-a709-24574552d77e\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-g59gc" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.845031 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-hprlt"] Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.863102 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-w4n4c"] Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.864076 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-w4n4c" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.866327 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-njc9d"] Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.872879 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-w4n4c"] Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.882898 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm44b\" (UniqueName: \"kubernetes.io/projected/dd33e5e9-4983-4954-966e-a693cc5c299b-kube-api-access-gm44b\") pod \"designate-operator-controller-manager-6d8bf5c495-ztqm7\" (UID: \"dd33e5e9-4983-4954-966e-a693cc5c299b\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ztqm7" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.883712 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-2wjtt" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.886881 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jmnd\" (UniqueName: \"kubernetes.io/projected/4b94f9da-5e45-4428-a709-24574552d77e-kube-api-access-6jmnd\") pod \"barbican-operator-controller-manager-868647ff47-g59gc\" (UID: \"4b94f9da-5e45-4428-a709-24574552d77e\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-g59gc" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.888241 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ztqm7" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.905950 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79pm8\" (UniqueName: \"kubernetes.io/projected/1ba8dd89-0865-4766-b216-b906d4d6f77a-kube-api-access-79pm8\") pod \"cinder-operator-controller-manager-5d946d989d-hn9cd\" (UID: \"1ba8dd89-0865-4766-b216-b906d4d6f77a\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-hn9cd" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.911136 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w95dm\" (UniqueName: \"kubernetes.io/projected/c960bafe-e1ce-4635-a849-758a84db3b0e-kube-api-access-w95dm\") pod \"glance-operator-controller-manager-77987464f4-hq98q\" (UID: \"c960bafe-e1ce-4635-a849-758a84db3b0e\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-hq98q" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.911732 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-7n7vf"] Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.934294 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-7z4m9"] Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.935067 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7z4m9" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.936467 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpnz7\" (UniqueName: \"kubernetes.io/projected/a14096df-2211-4053-afb4-ad8d68ff0723-kube-api-access-xpnz7\") pod \"manila-operator-controller-manager-54f6768c69-njc9d\" (UID: \"a14096df-2211-4053-afb4-ad8d68ff0723\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-njc9d" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.936820 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tddlf\" (UniqueName: \"kubernetes.io/projected/eed2b359-6b1f-4db4-947a-6ed3bf4385cc-kube-api-access-tddlf\") pod \"ironic-operator-controller-manager-554564d7fc-7d2vx\" (UID: \"eed2b359-6b1f-4db4-947a-6ed3bf4385cc\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-7d2vx" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.936981 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-gk976" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.937358 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ab70788d-b168-497b-bea0-4847ee80ce73-cert\") pod \"infra-operator-controller-manager-79d975b745-cvzzp\" (UID: \"ab70788d-b168-497b-bea0-4847ee80ce73\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cvzzp" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.937594 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jwzk\" (UniqueName: \"kubernetes.io/projected/e6de77c2-2965-48a3-a79a-75539ca32b8b-kube-api-access-6jwzk\") pod \"mariadb-operator-controller-manager-6994f66f48-7n7vf\" (UID: \"e6de77c2-2965-48a3-a79a-75539ca32b8b\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-7n7vf" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.937715 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-228t6\" (UniqueName: \"kubernetes.io/projected/eaf010f7-5113-4970-b963-682d17243fc9-kube-api-access-228t6\") pod \"neutron-operator-controller-manager-64ddbf8bb-hprlt\" (UID: \"eaf010f7-5113-4970-b963-682d17243fc9\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-hprlt" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.937798 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d8nr\" (UniqueName: \"kubernetes.io/projected/a09fe0a0-c328-4306-b1de-c8bddc00378f-kube-api-access-4d8nr\") pod \"keystone-operator-controller-manager-b4d948c87-8wsws\" (UID: \"a09fe0a0-c328-4306-b1de-c8bddc00378f\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8wsws" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.937897 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qgxb\" (UniqueName: \"kubernetes.io/projected/0168dd3a-5296-440d-8b46-d858da1cfeb6-kube-api-access-5qgxb\") pod \"horizon-operator-controller-manager-5b9b8895d5-ng9mx\" (UID: \"0168dd3a-5296-440d-8b46-d858da1cfeb6\") " 
pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-ng9mx" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.937977 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh5cc\" (UniqueName: \"kubernetes.io/projected/ab70788d-b168-497b-bea0-4847ee80ce73-kube-api-access-hh5cc\") pod \"infra-operator-controller-manager-79d975b745-cvzzp\" (UID: \"ab70788d-b168-497b-bea0-4847ee80ce73\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cvzzp" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.938049 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8qck\" (UniqueName: \"kubernetes.io/projected/70feab77-0665-499a-b6e2-b35b95384ab7-kube-api-access-l8qck\") pod \"heat-operator-controller-manager-69f49c598c-wthd9\" (UID: \"70feab77-0665-499a-b6e2-b35b95384ab7\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-wthd9" Feb 19 13:02:01 crc kubenswrapper[4833]: E0219 13:02:01.937559 4833 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 13:02:01 crc kubenswrapper[4833]: E0219 13:02:01.938439 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab70788d-b168-497b-bea0-4847ee80ce73-cert podName:ab70788d-b168-497b-bea0-4847ee80ce73 nodeName:}" failed. No retries permitted until 2026-02-19 13:02:02.43841585 +0000 UTC m=+932.833934618 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ab70788d-b168-497b-bea0-4847ee80ce73-cert") pod "infra-operator-controller-manager-79d975b745-cvzzp" (UID: "ab70788d-b168-497b-bea0-4847ee80ce73") : secret "infra-operator-webhook-server-cert" not found Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.965797 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-7z4m9"] Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.980106 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8qck\" (UniqueName: \"kubernetes.io/projected/70feab77-0665-499a-b6e2-b35b95384ab7-kube-api-access-l8qck\") pod \"heat-operator-controller-manager-69f49c598c-wthd9\" (UID: \"70feab77-0665-499a-b6e2-b35b95384ab7\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-wthd9" Feb 19 13:02:01 crc kubenswrapper[4833]: I0219 13:02:01.985033 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh5cc\" (UniqueName: \"kubernetes.io/projected/ab70788d-b168-497b-bea0-4847ee80ce73-kube-api-access-hh5cc\") pod \"infra-operator-controller-manager-79d975b745-cvzzp\" (UID: \"ab70788d-b168-497b-bea0-4847ee80ce73\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cvzzp" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.001018 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qgxb\" (UniqueName: \"kubernetes.io/projected/0168dd3a-5296-440d-8b46-d858da1cfeb6-kube-api-access-5qgxb\") pod \"horizon-operator-controller-manager-5b9b8895d5-ng9mx\" (UID: \"0168dd3a-5296-440d-8b46-d858da1cfeb6\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-ng9mx" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.001289 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hq98q" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.005467 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz"] Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.006852 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.011035 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-v2dk4" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.011237 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.018033 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-bbl2n"] Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.019079 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bbl2n" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.023050 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-tfq7r" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.025726 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-wthd9" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.036572 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz"] Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.037100 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-ng9mx" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.039390 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpnz7\" (UniqueName: \"kubernetes.io/projected/a14096df-2211-4053-afb4-ad8d68ff0723-kube-api-access-xpnz7\") pod \"manila-operator-controller-manager-54f6768c69-njc9d\" (UID: \"a14096df-2211-4053-afb4-ad8d68ff0723\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-njc9d" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.039423 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tddlf\" (UniqueName: \"kubernetes.io/projected/eed2b359-6b1f-4db4-947a-6ed3bf4385cc-kube-api-access-tddlf\") pod \"ironic-operator-controller-manager-554564d7fc-7d2vx\" (UID: \"eed2b359-6b1f-4db4-947a-6ed3bf4385cc\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-7d2vx" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.039471 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txj29\" (UniqueName: \"kubernetes.io/projected/8df5aecb-140d-4845-b07c-ab75586e4b54-kube-api-access-txj29\") pod \"nova-operator-controller-manager-567668f5cf-w4n4c\" (UID: \"8df5aecb-140d-4845-b07c-ab75586e4b54\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-w4n4c" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.039495 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jwzk\" (UniqueName: \"kubernetes.io/projected/e6de77c2-2965-48a3-a79a-75539ca32b8b-kube-api-access-6jwzk\") pod \"mariadb-operator-controller-manager-6994f66f48-7n7vf\" (UID: \"e6de77c2-2965-48a3-a79a-75539ca32b8b\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-7n7vf" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.039525 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jckfg\" (UniqueName: \"kubernetes.io/projected/4deeacce-2501-4276-98cf-cb615e0b4dce-kube-api-access-jckfg\") pod \"octavia-operator-controller-manager-69f8888797-7z4m9\" (UID: \"4deeacce-2501-4276-98cf-cb615e0b4dce\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7z4m9" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.039563 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-228t6\" (UniqueName: \"kubernetes.io/projected/eaf010f7-5113-4970-b963-682d17243fc9-kube-api-access-228t6\") pod \"neutron-operator-controller-manager-64ddbf8bb-hprlt\" (UID: \"eaf010f7-5113-4970-b963-682d17243fc9\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-hprlt" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.039584 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d8nr\" (UniqueName: \"kubernetes.io/projected/a09fe0a0-c328-4306-b1de-c8bddc00378f-kube-api-access-4d8nr\") pod \"keystone-operator-controller-manager-b4d948c87-8wsws\" (UID: \"a09fe0a0-c328-4306-b1de-c8bddc00378f\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8wsws" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.050119 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-bbl2n"] Feb 
19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.061277 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-cn5hb"] Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.062389 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-cn5hb" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.065096 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-xt4jw" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.066611 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jwzk\" (UniqueName: \"kubernetes.io/projected/e6de77c2-2965-48a3-a79a-75539ca32b8b-kube-api-access-6jwzk\") pod \"mariadb-operator-controller-manager-6994f66f48-7n7vf\" (UID: \"e6de77c2-2965-48a3-a79a-75539ca32b8b\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-7n7vf" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.073767 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-hrnmv"] Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.075223 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpnz7\" (UniqueName: \"kubernetes.io/projected/a14096df-2211-4053-afb4-ad8d68ff0723-kube-api-access-xpnz7\") pod \"manila-operator-controller-manager-54f6768c69-njc9d\" (UID: \"a14096df-2211-4053-afb4-ad8d68ff0723\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-njc9d" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.078713 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hrnmv" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.082018 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-zmsck" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.082310 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-228t6\" (UniqueName: \"kubernetes.io/projected/eaf010f7-5113-4970-b963-682d17243fc9-kube-api-access-228t6\") pod \"neutron-operator-controller-manager-64ddbf8bb-hprlt\" (UID: \"eaf010f7-5113-4970-b963-682d17243fc9\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-hprlt" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.088607 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tddlf\" (UniqueName: \"kubernetes.io/projected/eed2b359-6b1f-4db4-947a-6ed3bf4385cc-kube-api-access-tddlf\") pod \"ironic-operator-controller-manager-554564d7fc-7d2vx\" (UID: \"eed2b359-6b1f-4db4-947a-6ed3bf4385cc\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-7d2vx" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.115465 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d8nr\" (UniqueName: \"kubernetes.io/projected/a09fe0a0-c328-4306-b1de-c8bddc00378f-kube-api-access-4d8nr\") pod \"keystone-operator-controller-manager-b4d948c87-8wsws\" (UID: \"a09fe0a0-c328-4306-b1de-c8bddc00378f\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8wsws" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.118603 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-cn5hb"] Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.134297 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-hprlt" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.141585 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r5hs\" (UniqueName: \"kubernetes.io/projected/a8783b50-8a5e-4c9f-8f4b-513e4e0c7122-kube-api-access-5r5hs\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz\" (UID: \"a8783b50-8a5e-4c9f-8f4b-513e4e0c7122\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.141745 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a8783b50-8a5e-4c9f-8f4b-513e4e0c7122-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz\" (UID: \"a8783b50-8a5e-4c9f-8f4b-513e4e0c7122\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.141888 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txj29\" (UniqueName: \"kubernetes.io/projected/8df5aecb-140d-4845-b07c-ab75586e4b54-kube-api-access-txj29\") pod \"nova-operator-controller-manager-567668f5cf-w4n4c\" (UID: \"8df5aecb-140d-4845-b07c-ab75586e4b54\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-w4n4c" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.141959 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jckfg\" (UniqueName: \"kubernetes.io/projected/4deeacce-2501-4276-98cf-cb615e0b4dce-kube-api-access-jckfg\") pod \"octavia-operator-controller-manager-69f8888797-7z4m9\" (UID: \"4deeacce-2501-4276-98cf-cb615e0b4dce\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7z4m9" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.141988 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79vj2\" (UniqueName: \"kubernetes.io/projected/eddad8e9-ebc8-4772-9b30-76fc7bd09919-kube-api-access-79vj2\") pod \"ovn-operator-controller-manager-d44cf6b75-bbl2n\" (UID: \"eddad8e9-ebc8-4772-9b30-76fc7bd09919\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bbl2n" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.142474 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-hrnmv"] Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.143931 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-7n7vf" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.155752 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-njc9d" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.157893 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-hn9cd" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.166539 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txj29\" (UniqueName: \"kubernetes.io/projected/8df5aecb-140d-4845-b07c-ab75586e4b54-kube-api-access-txj29\") pod \"nova-operator-controller-manager-567668f5cf-w4n4c\" (UID: \"8df5aecb-140d-4845-b07c-ab75586e4b54\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-w4n4c" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.166855 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-g59gc" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.176132 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-lw9pq"] Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.177121 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-lw9pq" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.179461 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jckfg\" (UniqueName: \"kubernetes.io/projected/4deeacce-2501-4276-98cf-cb615e0b4dce-kube-api-access-jckfg\") pod \"octavia-operator-controller-manager-69f8888797-7z4m9\" (UID: \"4deeacce-2501-4276-98cf-cb615e0b4dce\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7z4m9" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.181549 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-w4n4c" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.186653 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-sbhfd" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.200583 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-lw9pq"] Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.229164 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-cn6cv"] Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.230110 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-cn6cv" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.232785 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-m9s2d" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.243662 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79vj2\" (UniqueName: \"kubernetes.io/projected/eddad8e9-ebc8-4772-9b30-76fc7bd09919-kube-api-access-79vj2\") pod \"ovn-operator-controller-manager-d44cf6b75-bbl2n\" (UID: \"eddad8e9-ebc8-4772-9b30-76fc7bd09919\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bbl2n" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.243715 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r5hs\" (UniqueName: \"kubernetes.io/projected/a8783b50-8a5e-4c9f-8f4b-513e4e0c7122-kube-api-access-5r5hs\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz\" (UID: \"a8783b50-8a5e-4c9f-8f4b-513e4e0c7122\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.243768 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmmb7\" (UniqueName: \"kubernetes.io/projected/636db3e6-7c84-4f25-896e-e3a542bdff19-kube-api-access-cmmb7\") pod \"swift-operator-controller-manager-68f46476f-cn5hb\" (UID: \"636db3e6-7c84-4f25-896e-e3a542bdff19\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-cn5hb" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.243792 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srst8\" (UniqueName: \"kubernetes.io/projected/3a2db5f5-bbec-4673-b32b-eef31c488a12-kube-api-access-srst8\") pod \"placement-operator-controller-manager-8497b45c89-hrnmv\" (UID: \"3a2db5f5-bbec-4673-b32b-eef31c488a12\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hrnmv" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.243815 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a8783b50-8a5e-4c9f-8f4b-513e4e0c7122-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz\" (UID: \"a8783b50-8a5e-4c9f-8f4b-513e4e0c7122\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz" Feb 19 13:02:02 crc kubenswrapper[4833]: E0219 13:02:02.243920 4833 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 13:02:02 crc kubenswrapper[4833]: E0219 13:02:02.243964 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8783b50-8a5e-4c9f-8f4b-513e4e0c7122-cert podName:a8783b50-8a5e-4c9f-8f4b-513e4e0c7122 nodeName:}" failed. No retries permitted until 2026-02-19 13:02:02.74395021 +0000 UTC m=+933.139468978 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a8783b50-8a5e-4c9f-8f4b-513e4e0c7122-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz" (UID: "a8783b50-8a5e-4c9f-8f4b-513e4e0c7122") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.246554 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-cn6cv"] Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.260004 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r5hs\" (UniqueName: \"kubernetes.io/projected/a8783b50-8a5e-4c9f-8f4b-513e4e0c7122-kube-api-access-5r5hs\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz\" (UID: \"a8783b50-8a5e-4c9f-8f4b-513e4e0c7122\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.262158 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79vj2\" (UniqueName: \"kubernetes.io/projected/eddad8e9-ebc8-4772-9b30-76fc7bd09919-kube-api-access-79vj2\") pod \"ovn-operator-controller-manager-d44cf6b75-bbl2n\" (UID: \"eddad8e9-ebc8-4772-9b30-76fc7bd09919\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bbl2n" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.275100 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-24cxm"] Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.275909 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-24cxm" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.277803 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-lsxkh" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.280129 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-24cxm"] Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.293911 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-744c6f7bcc-jsmlm"] Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.294793 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-744c6f7bcc-jsmlm" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.297570 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.297588 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.297667 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-bv8h4" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.329903 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7z4m9" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.338664 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-744c6f7bcc-jsmlm"] Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.352549 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bbl2n" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.352941 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmmb7\" (UniqueName: \"kubernetes.io/projected/636db3e6-7c84-4f25-896e-e3a542bdff19-kube-api-access-cmmb7\") pod \"swift-operator-controller-manager-68f46476f-cn5hb\" (UID: \"636db3e6-7c84-4f25-896e-e3a542bdff19\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-cn5hb" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.353011 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srst8\" (UniqueName: \"kubernetes.io/projected/3a2db5f5-bbec-4673-b32b-eef31c488a12-kube-api-access-srst8\") pod \"placement-operator-controller-manager-8497b45c89-hrnmv\" (UID: \"3a2db5f5-bbec-4673-b32b-eef31c488a12\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hrnmv" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.353118 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br7m9\" (UniqueName: \"kubernetes.io/projected/84b0c5a7-e111-4ee9-999b-5da00d00ffd0-kube-api-access-br7m9\") pod \"test-operator-controller-manager-7866795846-cn6cv\" (UID: \"84b0c5a7-e111-4ee9-999b-5da00d00ffd0\") " pod="openstack-operators/test-operator-controller-manager-7866795846-cn6cv" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.353180 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsrqc\" (UniqueName: \"kubernetes.io/projected/e0e6cafc-957b-4ebd-ad08-1bef03debe49-kube-api-access-qsrqc\") pod \"telemetry-operator-controller-manager-7f45b4ff68-lw9pq\" (UID: \"e0e6cafc-957b-4ebd-ad08-1bef03debe49\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-lw9pq" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.360696 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zwzkq"] Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.361807 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zwzkq" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.375333 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zwzkq"] Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.377395 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srst8\" (UniqueName: \"kubernetes.io/projected/3a2db5f5-bbec-4673-b32b-eef31c488a12-kube-api-access-srst8\") pod \"placement-operator-controller-manager-8497b45c89-hrnmv\" (UID: \"3a2db5f5-bbec-4673-b32b-eef31c488a12\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hrnmv" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.378069 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmmb7\" (UniqueName: \"kubernetes.io/projected/636db3e6-7c84-4f25-896e-e3a542bdff19-kube-api-access-cmmb7\") pod \"swift-operator-controller-manager-68f46476f-cn5hb\" (UID: \"636db3e6-7c84-4f25-896e-e3a542bdff19\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-cn5hb" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.378815 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-mqd4w" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.380314 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-7d2vx" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.380411 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-cn5hb" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.392860 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hrnmv" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.396529 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8wsws" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.454581 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-metrics-certs\") pod \"openstack-operator-controller-manager-744c6f7bcc-jsmlm\" (UID: \"81d2c5dc-91fd-4135-8408-104fc7badb60\") " pod="openstack-operators/openstack-operator-controller-manager-744c6f7bcc-jsmlm" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.454677 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br7m9\" (UniqueName: \"kubernetes.io/projected/84b0c5a7-e111-4ee9-999b-5da00d00ffd0-kube-api-access-br7m9\") pod \"test-operator-controller-manager-7866795846-cn6cv\" (UID: \"84b0c5a7-e111-4ee9-999b-5da00d00ffd0\") " pod="openstack-operators/test-operator-controller-manager-7866795846-cn6cv" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.454743 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv6bw\" (UniqueName: \"kubernetes.io/projected/1e12420e-fd8b-4ef2-bc12-9b3be0efa58a-kube-api-access-rv6bw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zwzkq\" (UID: \"1e12420e-fd8b-4ef2-bc12-9b3be0efa58a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zwzkq" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.454770 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsrqc\" (UniqueName: \"kubernetes.io/projected/e0e6cafc-957b-4ebd-ad08-1bef03debe49-kube-api-access-qsrqc\") pod \"telemetry-operator-controller-manager-7f45b4ff68-lw9pq\" (UID: \"e0e6cafc-957b-4ebd-ad08-1bef03debe49\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-lw9pq" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.454792 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdwlh\" (UniqueName: \"kubernetes.io/projected/81d2c5dc-91fd-4135-8408-104fc7badb60-kube-api-access-xdwlh\") pod \"openstack-operator-controller-manager-744c6f7bcc-jsmlm\" (UID: \"81d2c5dc-91fd-4135-8408-104fc7badb60\") " pod="openstack-operators/openstack-operator-controller-manager-744c6f7bcc-jsmlm" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.454840 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcz6d\" (UniqueName: \"kubernetes.io/projected/d7b1ebb3-ea0b-4e2f-b27a-e77abee17693-kube-api-access-bcz6d\") pod \"watcher-operator-controller-manager-5db88f68c-24cxm\" (UID: \"d7b1ebb3-ea0b-4e2f-b27a-e77abee17693\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-24cxm" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.454857 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-webhook-certs\") pod \"openstack-operator-controller-manager-744c6f7bcc-jsmlm\" (UID: \"81d2c5dc-91fd-4135-8408-104fc7badb60\") " pod="openstack-operators/openstack-operator-controller-manager-744c6f7bcc-jsmlm" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.454922 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/ab70788d-b168-497b-bea0-4847ee80ce73-cert\") pod \"infra-operator-controller-manager-79d975b745-cvzzp\" (UID: \"ab70788d-b168-497b-bea0-4847ee80ce73\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cvzzp" Feb 19 13:02:02 crc kubenswrapper[4833]: E0219 13:02:02.456722 4833 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 13:02:02 crc kubenswrapper[4833]: E0219 13:02:02.456786 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab70788d-b168-497b-bea0-4847ee80ce73-cert podName:ab70788d-b168-497b-bea0-4847ee80ce73 nodeName:}" failed. No retries permitted until 2026-02-19 13:02:03.456765238 +0000 UTC m=+933.852284086 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ab70788d-b168-497b-bea0-4847ee80ce73-cert") pod "infra-operator-controller-manager-79d975b745-cvzzp" (UID: "ab70788d-b168-497b-bea0-4847ee80ce73") : secret "infra-operator-webhook-server-cert" not found Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.477572 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br7m9\" (UniqueName: \"kubernetes.io/projected/84b0c5a7-e111-4ee9-999b-5da00d00ffd0-kube-api-access-br7m9\") pod \"test-operator-controller-manager-7866795846-cn6cv\" (UID: \"84b0c5a7-e111-4ee9-999b-5da00d00ffd0\") " pod="openstack-operators/test-operator-controller-manager-7866795846-cn6cv" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.479700 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsrqc\" (UniqueName: \"kubernetes.io/projected/e0e6cafc-957b-4ebd-ad08-1bef03debe49-kube-api-access-qsrqc\") pod \"telemetry-operator-controller-manager-7f45b4ff68-lw9pq\" (UID: \"e0e6cafc-957b-4ebd-ad08-1bef03debe49\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-lw9pq" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.497336 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-lw9pq" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.548931 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-cn6cv" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.557673 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-metrics-certs\") pod \"openstack-operator-controller-manager-744c6f7bcc-jsmlm\" (UID: \"81d2c5dc-91fd-4135-8408-104fc7badb60\") " pod="openstack-operators/openstack-operator-controller-manager-744c6f7bcc-jsmlm" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.557758 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv6bw\" (UniqueName: \"kubernetes.io/projected/1e12420e-fd8b-4ef2-bc12-9b3be0efa58a-kube-api-access-rv6bw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zwzkq\" (UID: \"1e12420e-fd8b-4ef2-bc12-9b3be0efa58a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zwzkq" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.557796 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdwlh\" (UniqueName: \"kubernetes.io/projected/81d2c5dc-91fd-4135-8408-104fc7badb60-kube-api-access-xdwlh\") pod \"openstack-operator-controller-manager-744c6f7bcc-jsmlm\" (UID: \"81d2c5dc-91fd-4135-8408-104fc7badb60\") " pod="openstack-operators/openstack-operator-controller-manager-744c6f7bcc-jsmlm" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.557839 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcz6d\" (UniqueName: \"kubernetes.io/projected/d7b1ebb3-ea0b-4e2f-b27a-e77abee17693-kube-api-access-bcz6d\") pod \"watcher-operator-controller-manager-5db88f68c-24cxm\" (UID: \"d7b1ebb3-ea0b-4e2f-b27a-e77abee17693\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-24cxm" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.557872 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-webhook-certs\") pod \"openstack-operator-controller-manager-744c6f7bcc-jsmlm\" (UID: \"81d2c5dc-91fd-4135-8408-104fc7badb60\") " pod="openstack-operators/openstack-operator-controller-manager-744c6f7bcc-jsmlm" Feb 19 13:02:02 crc kubenswrapper[4833]: E0219 13:02:02.558030 4833 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 13:02:02 crc kubenswrapper[4833]: E0219 13:02:02.558088 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-webhook-certs podName:81d2c5dc-91fd-4135-8408-104fc7badb60 nodeName:}" failed. No retries permitted until 2026-02-19 13:02:03.058068281 +0000 UTC m=+933.453587049 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-webhook-certs") pod "openstack-operator-controller-manager-744c6f7bcc-jsmlm" (UID: "81d2c5dc-91fd-4135-8408-104fc7badb60") : secret "webhook-server-cert" not found Feb 19 13:02:02 crc kubenswrapper[4833]: E0219 13:02:02.558140 4833 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 13:02:02 crc kubenswrapper[4833]: E0219 13:02:02.558207 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-metrics-certs podName:81d2c5dc-91fd-4135-8408-104fc7badb60 nodeName:}" failed. No retries permitted until 2026-02-19 13:02:03.058188914 +0000 UTC m=+933.453707682 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-metrics-certs") pod "openstack-operator-controller-manager-744c6f7bcc-jsmlm" (UID: "81d2c5dc-91fd-4135-8408-104fc7badb60") : secret "metrics-server-cert" not found Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.580129 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdwlh\" (UniqueName: \"kubernetes.io/projected/81d2c5dc-91fd-4135-8408-104fc7badb60-kube-api-access-xdwlh\") pod \"openstack-operator-controller-manager-744c6f7bcc-jsmlm\" (UID: \"81d2c5dc-91fd-4135-8408-104fc7badb60\") " pod="openstack-operators/openstack-operator-controller-manager-744c6f7bcc-jsmlm" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.580398 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv6bw\" (UniqueName: \"kubernetes.io/projected/1e12420e-fd8b-4ef2-bc12-9b3be0efa58a-kube-api-access-rv6bw\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zwzkq\" (UID: \"1e12420e-fd8b-4ef2-bc12-9b3be0efa58a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zwzkq" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.585735 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcz6d\" (UniqueName: \"kubernetes.io/projected/d7b1ebb3-ea0b-4e2f-b27a-e77abee17693-kube-api-access-bcz6d\") pod \"watcher-operator-controller-manager-5db88f68c-24cxm\" (UID: \"d7b1ebb3-ea0b-4e2f-b27a-e77abee17693\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-24cxm" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.594326 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-24cxm" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.712418 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zwzkq" Feb 19 13:02:02 crc kubenswrapper[4833]: I0219 13:02:02.761082 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a8783b50-8a5e-4c9f-8f4b-513e4e0c7122-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz\" (UID: \"a8783b50-8a5e-4c9f-8f4b-513e4e0c7122\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz" Feb 19 13:02:02 crc kubenswrapper[4833]: E0219 13:02:02.761238 4833 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 13:02:02 crc kubenswrapper[4833]: E0219 13:02:02.761287 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8783b50-8a5e-4c9f-8f4b-513e4e0c7122-cert podName:a8783b50-8a5e-4c9f-8f4b-513e4e0c7122 nodeName:}" failed. No retries permitted until 2026-02-19 13:02:03.761273513 +0000 UTC m=+934.156792281 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a8783b50-8a5e-4c9f-8f4b-513e4e0c7122-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz" (UID: "a8783b50-8a5e-4c9f-8f4b-513e4e0c7122") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 13:02:03 crc kubenswrapper[4833]: I0219 13:02:03.023029 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-ztqm7"] Feb 19 13:02:03 crc kubenswrapper[4833]: I0219 13:02:03.030208 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-hq98q"] Feb 19 13:02:03 crc kubenswrapper[4833]: I0219 13:02:03.066666 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-metrics-certs\") pod \"openstack-operator-controller-manager-744c6f7bcc-jsmlm\" (UID: \"81d2c5dc-91fd-4135-8408-104fc7badb60\") " pod="openstack-operators/openstack-operator-controller-manager-744c6f7bcc-jsmlm" Feb 19 13:02:03 crc kubenswrapper[4833]: I0219 13:02:03.066774 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-webhook-certs\") pod \"openstack-operator-controller-manager-744c6f7bcc-jsmlm\" (UID: \"81d2c5dc-91fd-4135-8408-104fc7badb60\") " pod="openstack-operators/openstack-operator-controller-manager-744c6f7bcc-jsmlm" Feb 19 13:02:03 crc kubenswrapper[4833]: E0219 13:02:03.066907 4833 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 13:02:03 crc kubenswrapper[4833]: E0219 13:02:03.066956 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-webhook-certs podName:81d2c5dc-91fd-4135-8408-104fc7badb60 nodeName:}" failed. No retries permitted until 2026-02-19 13:02:04.066939026 +0000 UTC m=+934.462457794 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-webhook-certs") pod "openstack-operator-controller-manager-744c6f7bcc-jsmlm" (UID: "81d2c5dc-91fd-4135-8408-104fc7badb60") : secret "webhook-server-cert" not found Feb 19 13:02:03 crc kubenswrapper[4833]: E0219 13:02:03.067576 4833 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 13:02:03 crc kubenswrapper[4833]: E0219 13:02:03.067612 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-metrics-certs podName:81d2c5dc-91fd-4135-8408-104fc7badb60 nodeName:}" failed. No retries permitted until 2026-02-19 13:02:04.067600992 +0000 UTC m=+934.463119760 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-metrics-certs") pod "openstack-operator-controller-manager-744c6f7bcc-jsmlm" (UID: "81d2c5dc-91fd-4135-8408-104fc7badb60") : secret "metrics-server-cert" not found Feb 19 13:02:03 crc kubenswrapper[4833]: I0219 13:02:03.162552 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-g59gc"] Feb 19 13:02:03 crc kubenswrapper[4833]: I0219 13:02:03.168376 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-hn9cd"] Feb 19 13:02:03 crc kubenswrapper[4833]: I0219 13:02:03.185594 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-w4n4c"] Feb 19 13:02:03 crc kubenswrapper[4833]: I0219 13:02:03.212153 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-7n7vf"] Feb 19 13:02:03 crc kubenswrapper[4833]: I0219 13:02:03.311976 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-7z4m9"] Feb 19 13:02:03 crc kubenswrapper[4833]: I0219 13:02:03.321147 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-hprlt"] Feb 19 13:02:03 crc kubenswrapper[4833]: W0219 13:02:03.327697 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeaf010f7_5113_4970_b963_682d17243fc9.slice/crio-6212f6665c71e69ddd3144bffa9016e7041fffbd39c768393f7007acb0d6a501 WatchSource:0}: Error finding container 6212f6665c71e69ddd3144bffa9016e7041fffbd39c768393f7007acb0d6a501: Status 404 returned error can't find the container with id 6212f6665c71e69ddd3144bffa9016e7041fffbd39c768393f7007acb0d6a501 Feb 19 13:02:03 crc kubenswrapper[4833]: I0219 13:02:03.329659 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-wthd9"] Feb 19 13:02:03 crc kubenswrapper[4833]: I0219 13:02:03.335599 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-njc9d"] Feb 19 13:02:03 crc kubenswrapper[4833]: I0219 13:02:03.383793 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-njc9d" 
event={"ID":"a14096df-2211-4053-afb4-ad8d68ff0723","Type":"ContainerStarted","Data":"e233e7610ea50711841cc3508d1ed6485d2921b841fd38d505c9a6f75728afe9"} Feb 19 13:02:03 crc kubenswrapper[4833]: I0219 13:02:03.384635 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-hprlt" event={"ID":"eaf010f7-5113-4970-b963-682d17243fc9","Type":"ContainerStarted","Data":"6212f6665c71e69ddd3144bffa9016e7041fffbd39c768393f7007acb0d6a501"} Feb 19 13:02:03 crc kubenswrapper[4833]: I0219 13:02:03.385992 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ztqm7" event={"ID":"dd33e5e9-4983-4954-966e-a693cc5c299b","Type":"ContainerStarted","Data":"47ff3b1ad46ca0bf3f1c0a911369a220a559b2fa3d5b6631fe88563203ffa37f"} Feb 19 13:02:03 crc kubenswrapper[4833]: I0219 13:02:03.387021 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-wthd9" event={"ID":"70feab77-0665-499a-b6e2-b35b95384ab7","Type":"ContainerStarted","Data":"f97e1984ada6e97a804a7c119f0424f3a2aa8347de4242113a2d7c8d0c97faeb"} Feb 19 13:02:03 crc kubenswrapper[4833]: I0219 13:02:03.387995 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7z4m9" event={"ID":"4deeacce-2501-4276-98cf-cb615e0b4dce","Type":"ContainerStarted","Data":"5172eb8a4f9250d1900b3fedc5c53b6f9fb8298120186a9ca33d6e3e511aae08"} Feb 19 13:02:03 crc kubenswrapper[4833]: I0219 13:02:03.389835 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hq98q" event={"ID":"c960bafe-e1ce-4635-a849-758a84db3b0e","Type":"ContainerStarted","Data":"ed0efadc2fc2403d606e2af7b27177ebdb0095e97504efa5da4786b2dadc99cd"} Feb 19 13:02:03 crc kubenswrapper[4833]: I0219 13:02:03.390683 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-7n7vf" event={"ID":"e6de77c2-2965-48a3-a79a-75539ca32b8b","Type":"ContainerStarted","Data":"df5a360296055034556fa95b2bd47496ea31097b295fbf049561675bbe9af04e"} Feb 19 13:02:03 crc kubenswrapper[4833]: I0219 13:02:03.391753 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-w4n4c" event={"ID":"8df5aecb-140d-4845-b07c-ab75586e4b54","Type":"ContainerStarted","Data":"c0c9d351cda954223b6aafce0628f12c60659050e8d2dea10ec9f4ba441c068a"} Feb 19 13:02:03 crc kubenswrapper[4833]: I0219 13:02:03.393137 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-hn9cd" event={"ID":"1ba8dd89-0865-4766-b216-b906d4d6f77a","Type":"ContainerStarted","Data":"3b3fd3b06507b1ec95b7c5462ba757ee82df4afe301b58266a4d318b618f4d4d"} Feb 19 13:02:03 crc kubenswrapper[4833]: I0219 13:02:03.394281 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-g59gc" event={"ID":"4b94f9da-5e45-4428-a709-24574552d77e","Type":"ContainerStarted","Data":"28d6a02cb533575a4b0d05f27c81bc6c9b558c2a1d6bf5de233c21e63c10ede9"} Feb 19 13:02:03 crc kubenswrapper[4833]: I0219 13:02:03.473212 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ab70788d-b168-497b-bea0-4847ee80ce73-cert\") pod 
\"infra-operator-controller-manager-79d975b745-cvzzp\" (UID: \"ab70788d-b168-497b-bea0-4847ee80ce73\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cvzzp" Feb 19 13:02:03 crc kubenswrapper[4833]: E0219 13:02:03.473403 4833 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 13:02:03 crc kubenswrapper[4833]: E0219 13:02:03.473449 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab70788d-b168-497b-bea0-4847ee80ce73-cert podName:ab70788d-b168-497b-bea0-4847ee80ce73 nodeName:}" failed. No retries permitted until 2026-02-19 13:02:05.473433031 +0000 UTC m=+935.868951799 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ab70788d-b168-497b-bea0-4847ee80ce73-cert") pod "infra-operator-controller-manager-79d975b745-cvzzp" (UID: "ab70788d-b168-497b-bea0-4847ee80ce73") : secret "infra-operator-webhook-server-cert" not found Feb 19 13:02:03 crc kubenswrapper[4833]: W0219 13:02:03.521766 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod636db3e6_7c84_4f25_896e_e3a542bdff19.slice/crio-135f928b9a05f3dec1a9e2efd155a1c675b524461615d2c89d0c2f822c06b721 WatchSource:0}: Error finding container 135f928b9a05f3dec1a9e2efd155a1c675b524461615d2c89d0c2f822c06b721: Status 404 returned error can't find the container with id 135f928b9a05f3dec1a9e2efd155a1c675b524461615d2c89d0c2f822c06b721 Feb 19 13:02:03 crc kubenswrapper[4833]: I0219 13:02:03.524642 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-cn5hb"] Feb 19 13:02:03 crc kubenswrapper[4833]: W0219 13:02:03.525688 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda09fe0a0_c328_4306_b1de_c8bddc00378f.slice/crio-dd420f69d60d73542e357a0ee23c000f692dd4c169fec223e7f3956375e699bf WatchSource:0}: Error finding container dd420f69d60d73542e357a0ee23c000f692dd4c169fec223e7f3956375e699bf: Status 404 returned error can't find the container with id dd420f69d60d73542e357a0ee23c000f692dd4c169fec223e7f3956375e699bf Feb 19 13:02:03 crc kubenswrapper[4833]: I0219 13:02:03.531661 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-8wsws"] Feb 19 13:02:03 crc kubenswrapper[4833]: I0219 13:02:03.536956 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-lw9pq"] Feb 19 13:02:03 crc kubenswrapper[4833]: W0219 13:02:03.537673 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a2db5f5_bbec_4673_b32b_eef31c488a12.slice/crio-508a823c33e0cac9431e46a3f21981d9c3ba4a2e9cf2ee577d2a640a7cde91fb WatchSource:0}: Error finding container 508a823c33e0cac9431e46a3f21981d9c3ba4a2e9cf2ee577d2a640a7cde91fb: Status 404 returned error can't find the container with id 508a823c33e0cac9431e46a3f21981d9c3ba4a2e9cf2ee577d2a640a7cde91fb Feb 19 13:02:03 crc kubenswrapper[4833]: I0219 13:02:03.540140 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-7d2vx"] Feb 19 13:02:03 crc kubenswrapper[4833]: E0219 13:02:03.540710 4833 kuberuntime_manager.go:1274] 
"Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-srst8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-hrnmv_openstack-operators(3a2db5f5-bbec-4673-b32b-eef31c488a12): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 13:02:03 crc kubenswrapper[4833]: E0219 13:02:03.542755 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hrnmv" podUID="3a2db5f5-bbec-4673-b32b-eef31c488a12" Feb 19 13:02:03 crc kubenswrapper[4833]: I0219 13:02:03.549664 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-hrnmv"] Feb 19 13:02:03 crc kubenswrapper[4833]: W0219 13:02:03.550968 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeed2b359_6b1f_4db4_947a_6ed3bf4385cc.slice/crio-d4d6604446b656b6c1f8586ef4bf4fe4221eb985866d56ad05fb58363096c950 WatchSource:0}: Error finding container d4d6604446b656b6c1f8586ef4bf4fe4221eb985866d56ad05fb58363096c950: Status 404 returned error can't find the container with id d4d6604446b656b6c1f8586ef4bf4fe4221eb985866d56ad05fb58363096c950 Feb 19 13:02:03 crc 
kubenswrapper[4833]: E0219 13:02:03.552089 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-79vj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-bbl2n_openstack-operators(eddad8e9-ebc8-4772-9b30-76fc7bd09919): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 13:02:03 crc kubenswrapper[4833]: I0219 13:02:03.552690 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-bbl2n"] Feb 19 13:02:03 crc kubenswrapper[4833]: E0219 13:02:03.553220 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bbl2n" podUID="eddad8e9-ebc8-4772-9b30-76fc7bd09919" Feb 19 13:02:03 crc kubenswrapper[4833]: W0219 13:02:03.558531 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0e6cafc_957b_4ebd_ad08_1bef03debe49.slice/crio-51a5976386f435b3328de0ed5b794d0f445fba7cbbe72a736e00e6e6fd630984 WatchSource:0}: Error finding container 51a5976386f435b3328de0ed5b794d0f445fba7cbbe72a736e00e6e6fd630984: Status 404 returned error can't find the container with id 
51a5976386f435b3328de0ed5b794d0f445fba7cbbe72a736e00e6e6fd630984 Feb 19 13:02:03 crc kubenswrapper[4833]: E0219 13:02:03.559790 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tddlf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-554564d7fc-7d2vx_openstack-operators(eed2b359-6b1f-4db4-947a-6ed3bf4385cc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 13:02:03 crc kubenswrapper[4833]: I0219 13:02:03.560471 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-24cxm"] Feb 19 13:02:03 crc kubenswrapper[4833]: E0219 13:02:03.560874 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-7d2vx" podUID="eed2b359-6b1f-4db4-947a-6ed3bf4385cc" Feb 19 13:02:03 crc kubenswrapper[4833]: E0219 13:02:03.560892 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qsrqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7f45b4ff68-lw9pq_openstack-operators(e0e6cafc-957b-4ebd-ad08-1bef03debe49): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 13:02:03 crc kubenswrapper[4833]: W0219 13:02:03.560972 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0168dd3a_5296_440d_8b46_d858da1cfeb6.slice/crio-e7cf394735662b194080a653725cd8de0c9e832ec53751ed7e4e7df7c5b2310f WatchSource:0}: Error finding container e7cf394735662b194080a653725cd8de0c9e832ec53751ed7e4e7df7c5b2310f: Status 404 returned error can't find the container with id e7cf394735662b194080a653725cd8de0c9e832ec53751ed7e4e7df7c5b2310f Feb 19 13:02:03 crc kubenswrapper[4833]: E0219 13:02:03.561074 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bcz6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-24cxm_openstack-operators(d7b1ebb3-ea0b-4e2f-b27a-e77abee17693): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 13:02:03 crc kubenswrapper[4833]: E0219 13:02:03.563587 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-lw9pq" podUID="e0e6cafc-957b-4ebd-ad08-1bef03debe49" Feb 19 13:02:03 crc kubenswrapper[4833]: E0219 13:02:03.563581 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-24cxm" podUID="d7b1ebb3-ea0b-4e2f-b27a-e77abee17693" Feb 19 13:02:03 crc kubenswrapper[4833]: I0219 13:02:03.566677 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-ng9mx"] Feb 19 13:02:03 crc kubenswrapper[4833]: E0219 13:02:03.567207 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5qgxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5b9b8895d5-ng9mx_openstack-operators(0168dd3a-5296-440d-8b46-d858da1cfeb6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 13:02:03 crc kubenswrapper[4833]: E0219 13:02:03.568680 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-ng9mx" podUID="0168dd3a-5296-440d-8b46-d858da1cfeb6" Feb 19 13:02:03 crc kubenswrapper[4833]: I0219 13:02:03.571718 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-cn6cv"] Feb 19 13:02:03 crc kubenswrapper[4833]: I0219 13:02:03.575927 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zwzkq"] Feb 19 13:02:03 crc kubenswrapper[4833]: E0219 13:02:03.576093 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rv6bw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-zwzkq_openstack-operators(1e12420e-fd8b-4ef2-bc12-9b3be0efa58a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 13:02:03 crc kubenswrapper[4833]: E0219 13:02:03.578144 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zwzkq" podUID="1e12420e-fd8b-4ef2-bc12-9b3be0efa58a" Feb 19 13:02:03 crc kubenswrapper[4833]: E0219 13:02:03.634721 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-br7m9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-cn6cv_openstack-operators(84b0c5a7-e111-4ee9-999b-5da00d00ffd0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 19 13:02:03 crc kubenswrapper[4833]: E0219 13:02:03.636627 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7866795846-cn6cv" podUID="84b0c5a7-e111-4ee9-999b-5da00d00ffd0" Feb 19 13:02:03 crc kubenswrapper[4833]: I0219 13:02:03.780844 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a8783b50-8a5e-4c9f-8f4b-513e4e0c7122-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz\" (UID: \"a8783b50-8a5e-4c9f-8f4b-513e4e0c7122\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz" Feb 19 13:02:03 crc kubenswrapper[4833]: E0219 13:02:03.781206 4833 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 13:02:03 crc kubenswrapper[4833]: E0219 13:02:03.781276 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8783b50-8a5e-4c9f-8f4b-513e4e0c7122-cert podName:a8783b50-8a5e-4c9f-8f4b-513e4e0c7122 nodeName:}" failed. No retries permitted until 2026-02-19 13:02:05.781257728 +0000 UTC m=+936.176776496 (durationBeforeRetry 2s). 
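The burst of `ErrImagePull: pull QPS exceeded` errors in the entries above is not a registry failure: the kubelet rate-limits its own image pulls client-side (the `registryPullQPS` and `registryBurst` kubelet settings, which default to 5 pulls per second with a burst of 10, if memory serves). With roughly twenty operator pods scheduling in the same second, that budget is exhausted and the overflow pulls fail immediately and move to back-off. A toy token-bucket sketch (an illustration only, not kubelet's actual limiter) reproduces the pattern:

```python
# Toy token-bucket sketch of a client-side pull limiter (illustration only,
# not kubelet's code). With qps=5 and burst=10, the 11th..20th pulls requested
# in the same instant are rejected -- the "pull QPS exceeded" pattern above.
class TokenBucket:
    def __init__(self, qps: float, burst: int):
        self.qps, self.burst = qps, burst
        self.tokens = float(burst)
        self.last = 0.0  # virtual clock, seconds

    def allow(self, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.qps)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(qps=5, burst=10)
for i in range(20):  # all requested at t=0, like 20 operator pods starting at once
    verdict = "pulling" if bucket.allow(now=0.0) else "ErrImagePull: pull QPS exceeded"
    print(f"operator-image-{i}: {verdict}")
```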
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a8783b50-8a5e-4c9f-8f4b-513e4e0c7122-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz" (UID: "a8783b50-8a5e-4c9f-8f4b-513e4e0c7122") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 13:02:04 crc kubenswrapper[4833]: I0219 13:02:04.086085 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-webhook-certs\") pod \"openstack-operator-controller-manager-744c6f7bcc-jsmlm\" (UID: \"81d2c5dc-91fd-4135-8408-104fc7badb60\") " pod="openstack-operators/openstack-operator-controller-manager-744c6f7bcc-jsmlm" Feb 19 13:02:04 crc kubenswrapper[4833]: I0219 13:02:04.086172 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-metrics-certs\") pod \"openstack-operator-controller-manager-744c6f7bcc-jsmlm\" (UID: \"81d2c5dc-91fd-4135-8408-104fc7badb60\") " pod="openstack-operators/openstack-operator-controller-manager-744c6f7bcc-jsmlm" Feb 19 13:02:04 crc kubenswrapper[4833]: E0219 13:02:04.086295 4833 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 13:02:04 crc kubenswrapper[4833]: E0219 13:02:04.086340 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-metrics-certs podName:81d2c5dc-91fd-4135-8408-104fc7badb60 nodeName:}" failed. No retries permitted until 2026-02-19 13:02:06.086327576 +0000 UTC m=+936.481846344 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-metrics-certs") pod "openstack-operator-controller-manager-744c6f7bcc-jsmlm" (UID: "81d2c5dc-91fd-4135-8408-104fc7badb60") : secret "metrics-server-cert" not found Feb 19 13:02:04 crc kubenswrapper[4833]: E0219 13:02:04.086640 4833 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 13:02:04 crc kubenswrapper[4833]: E0219 13:02:04.086664 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-webhook-certs podName:81d2c5dc-91fd-4135-8408-104fc7badb60 nodeName:}" failed. No retries permitted until 2026-02-19 13:02:06.086656914 +0000 UTC m=+936.482175682 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-webhook-certs") pod "openstack-operator-controller-manager-744c6f7bcc-jsmlm" (UID: "81d2c5dc-91fd-4135-8408-104fc7badb60") : secret "webhook-server-cert" not found Feb 19 13:02:04 crc kubenswrapper[4833]: I0219 13:02:04.403097 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-lw9pq" event={"ID":"e0e6cafc-957b-4ebd-ad08-1bef03debe49","Type":"ContainerStarted","Data":"51a5976386f435b3328de0ed5b794d0f445fba7cbbe72a736e00e6e6fd630984"} Feb 19 13:02:04 crc kubenswrapper[4833]: I0219 13:02:04.404888 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-24cxm" event={"ID":"d7b1ebb3-ea0b-4e2f-b27a-e77abee17693","Type":"ContainerStarted","Data":"6be3af60e0d1a2ebb173e0947d4e1fa62d8f7df50186fa1c1ecc138216d8364f"} Feb 19 13:02:04 crc kubenswrapper[4833]: E0219 13:02:04.406457 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-lw9pq" podUID="e0e6cafc-957b-4ebd-ad08-1bef03debe49" Feb 19 13:02:04 crc kubenswrapper[4833]: E0219 13:02:04.406706 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-24cxm" podUID="d7b1ebb3-ea0b-4e2f-b27a-e77abee17693" Feb 19 13:02:04 crc kubenswrapper[4833]: I0219 13:02:04.407375 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zwzkq" event={"ID":"1e12420e-fd8b-4ef2-bc12-9b3be0efa58a","Type":"ContainerStarted","Data":"31a7a1491e158ec72d92c00e88aa75f3128efb617a92a113e6fa6d7575b3e4a1"} Feb 19 13:02:04 crc kubenswrapper[4833]: E0219 13:02:04.408249 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zwzkq" podUID="1e12420e-fd8b-4ef2-bc12-9b3be0efa58a" Feb 19 13:02:04 crc kubenswrapper[4833]: I0219 13:02:04.408862 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-cn5hb" event={"ID":"636db3e6-7c84-4f25-896e-e3a542bdff19","Type":"ContainerStarted","Data":"135f928b9a05f3dec1a9e2efd155a1c675b524461615d2c89d0c2f822c06b721"} Feb 19 13:02:04 crc kubenswrapper[4833]: I0219 13:02:04.409788 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-ng9mx" event={"ID":"0168dd3a-5296-440d-8b46-d858da1cfeb6","Type":"ContainerStarted","Data":"e7cf394735662b194080a653725cd8de0c9e832ec53751ed7e4e7df7c5b2310f"} Feb 19 13:02:04 crc kubenswrapper[4833]: E0219 13:02:04.410959 4833 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-ng9mx" podUID="0168dd3a-5296-440d-8b46-d858da1cfeb6" Feb 19 13:02:04 crc kubenswrapper[4833]: I0219 13:02:04.411653 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bbl2n" event={"ID":"eddad8e9-ebc8-4772-9b30-76fc7bd09919","Type":"ContainerStarted","Data":"62f6a86a6775ca9b7fd958e16ad3b67ae6f855e92244ffdbb8f1c2de52591ced"} Feb 19 13:02:04 crc kubenswrapper[4833]: E0219 13:02:04.413005 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bbl2n" podUID="eddad8e9-ebc8-4772-9b30-76fc7bd09919" Feb 19 13:02:04 crc kubenswrapper[4833]: I0219 13:02:04.413834 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8wsws" event={"ID":"a09fe0a0-c328-4306-b1de-c8bddc00378f","Type":"ContainerStarted","Data":"dd420f69d60d73542e357a0ee23c000f692dd4c169fec223e7f3956375e699bf"} Feb 19 13:02:04 crc kubenswrapper[4833]: I0219 13:02:04.420292 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hrnmv" event={"ID":"3a2db5f5-bbec-4673-b32b-eef31c488a12","Type":"ContainerStarted","Data":"508a823c33e0cac9431e46a3f21981d9c3ba4a2e9cf2ee577d2a640a7cde91fb"} Feb 19 13:02:04 crc kubenswrapper[4833]: E0219 13:02:04.422685 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hrnmv" podUID="3a2db5f5-bbec-4673-b32b-eef31c488a12" Feb 19 13:02:04 crc kubenswrapper[4833]: I0219 13:02:04.423308 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-7d2vx" event={"ID":"eed2b359-6b1f-4db4-947a-6ed3bf4385cc","Type":"ContainerStarted","Data":"d4d6604446b656b6c1f8586ef4bf4fe4221eb985866d56ad05fb58363096c950"} Feb 19 13:02:04 crc kubenswrapper[4833]: I0219 13:02:04.425148 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-cn6cv" event={"ID":"84b0c5a7-e111-4ee9-999b-5da00d00ffd0","Type":"ContainerStarted","Data":"e14dfd9aada1e2df92ca714f5226d2b9516fce7a3f6eefa96bf6df1b0ded0a15"} Feb 19 13:02:04 crc kubenswrapper[4833]: E0219 13:02:04.425424 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-7d2vx" 
podUID="eed2b359-6b1f-4db4-947a-6ed3bf4385cc" Feb 19 13:02:04 crc kubenswrapper[4833]: E0219 13:02:04.426155 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-cn6cv" podUID="84b0c5a7-e111-4ee9-999b-5da00d00ffd0" Feb 19 13:02:05 crc kubenswrapper[4833]: E0219 13:02:05.438363 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-lw9pq" podUID="e0e6cafc-957b-4ebd-ad08-1bef03debe49" Feb 19 13:02:05 crc kubenswrapper[4833]: E0219 13:02:05.443011 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zwzkq" podUID="1e12420e-fd8b-4ef2-bc12-9b3be0efa58a" Feb 19 13:02:05 crc kubenswrapper[4833]: E0219 13:02:05.456590 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-cn6cv" podUID="84b0c5a7-e111-4ee9-999b-5da00d00ffd0" Feb 19 13:02:05 crc kubenswrapper[4833]: E0219 13:02:05.456835 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-7d2vx" podUID="eed2b359-6b1f-4db4-947a-6ed3bf4385cc" Feb 19 13:02:05 crc kubenswrapper[4833]: E0219 13:02:05.457417 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bbl2n" podUID="eddad8e9-ebc8-4772-9b30-76fc7bd09919" Feb 19 13:02:05 crc kubenswrapper[4833]: E0219 13:02:05.457468 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hrnmv" podUID="3a2db5f5-bbec-4673-b32b-eef31c488a12" Feb 19 13:02:05 crc kubenswrapper[4833]: E0219 13:02:05.457529 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-24cxm" podUID="d7b1ebb3-ea0b-4e2f-b27a-e77abee17693" Feb 19 13:02:05 crc kubenswrapper[4833]: E0219 13:02:05.457532 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-ng9mx" podUID="0168dd3a-5296-440d-8b46-d858da1cfeb6" Feb 19 13:02:05 crc kubenswrapper[4833]: I0219 13:02:05.516719 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ab70788d-b168-497b-bea0-4847ee80ce73-cert\") pod \"infra-operator-controller-manager-79d975b745-cvzzp\" (UID: \"ab70788d-b168-497b-bea0-4847ee80ce73\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cvzzp" Feb 19 13:02:05 crc kubenswrapper[4833]: E0219 13:02:05.517398 4833 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 13:02:05 crc kubenswrapper[4833]: E0219 13:02:05.517512 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab70788d-b168-497b-bea0-4847ee80ce73-cert podName:ab70788d-b168-497b-bea0-4847ee80ce73 nodeName:}" failed. No retries permitted until 2026-02-19 13:02:09.51746651 +0000 UTC m=+939.912985278 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ab70788d-b168-497b-bea0-4847ee80ce73-cert") pod "infra-operator-controller-manager-79d975b745-cvzzp" (UID: "ab70788d-b168-497b-bea0-4847ee80ce73") : secret "infra-operator-webhook-server-cert" not found Feb 19 13:02:05 crc kubenswrapper[4833]: I0219 13:02:05.821884 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a8783b50-8a5e-4c9f-8f4b-513e4e0c7122-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz\" (UID: \"a8783b50-8a5e-4c9f-8f4b-513e4e0c7122\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz" Feb 19 13:02:05 crc kubenswrapper[4833]: E0219 13:02:05.822056 4833 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 13:02:05 crc kubenswrapper[4833]: E0219 13:02:05.822136 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8783b50-8a5e-4c9f-8f4b-513e4e0c7122-cert podName:a8783b50-8a5e-4c9f-8f4b-513e4e0c7122 nodeName:}" failed. No retries permitted until 2026-02-19 13:02:09.822115698 +0000 UTC m=+940.217634466 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a8783b50-8a5e-4c9f-8f4b-513e4e0c7122-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz" (UID: "a8783b50-8a5e-4c9f-8f4b-513e4e0c7122") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 13:02:06 crc kubenswrapper[4833]: I0219 13:02:06.143288 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-webhook-certs\") pod \"openstack-operator-controller-manager-744c6f7bcc-jsmlm\" (UID: \"81d2c5dc-91fd-4135-8408-104fc7badb60\") " pod="openstack-operators/openstack-operator-controller-manager-744c6f7bcc-jsmlm" Feb 19 13:02:06 crc kubenswrapper[4833]: I0219 13:02:06.143372 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-metrics-certs\") pod \"openstack-operator-controller-manager-744c6f7bcc-jsmlm\" (UID: \"81d2c5dc-91fd-4135-8408-104fc7badb60\") " pod="openstack-operators/openstack-operator-controller-manager-744c6f7bcc-jsmlm" Feb 19 13:02:06 crc kubenswrapper[4833]: E0219 13:02:06.143421 4833 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 13:02:06 crc kubenswrapper[4833]: E0219 13:02:06.143476 4833 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 13:02:06 crc kubenswrapper[4833]: E0219 13:02:06.143516 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-webhook-certs podName:81d2c5dc-91fd-4135-8408-104fc7badb60 nodeName:}" failed. No retries permitted until 2026-02-19 13:02:10.143472858 +0000 UTC m=+940.538991626 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-webhook-certs") pod "openstack-operator-controller-manager-744c6f7bcc-jsmlm" (UID: "81d2c5dc-91fd-4135-8408-104fc7badb60") : secret "webhook-server-cert" not found Feb 19 13:02:06 crc kubenswrapper[4833]: E0219 13:02:06.143548 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-metrics-certs podName:81d2c5dc-91fd-4135-8408-104fc7badb60 nodeName:}" failed. No retries permitted until 2026-02-19 13:02:10.143536449 +0000 UTC m=+940.539055217 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-metrics-certs") pod "openstack-operator-controller-manager-744c6f7bcc-jsmlm" (UID: "81d2c5dc-91fd-4135-8408-104fc7badb60") : secret "metrics-server-cert" not found Feb 19 13:02:09 crc kubenswrapper[4833]: I0219 13:02:09.588777 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ab70788d-b168-497b-bea0-4847ee80ce73-cert\") pod \"infra-operator-controller-manager-79d975b745-cvzzp\" (UID: \"ab70788d-b168-497b-bea0-4847ee80ce73\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cvzzp" Feb 19 13:02:09 crc kubenswrapper[4833]: E0219 13:02:09.589297 4833 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 19 13:02:09 crc kubenswrapper[4833]: E0219 13:02:09.589343 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab70788d-b168-497b-bea0-4847ee80ce73-cert podName:ab70788d-b168-497b-bea0-4847ee80ce73 nodeName:}" failed. No retries permitted until 2026-02-19 13:02:17.5893298 +0000 UTC m=+947.984848568 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ab70788d-b168-497b-bea0-4847ee80ce73-cert") pod "infra-operator-controller-manager-79d975b745-cvzzp" (UID: "ab70788d-b168-497b-bea0-4847ee80ce73") : secret "infra-operator-webhook-server-cert" not found Feb 19 13:02:09 crc kubenswrapper[4833]: I0219 13:02:09.893332 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a8783b50-8a5e-4c9f-8f4b-513e4e0c7122-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz\" (UID: \"a8783b50-8a5e-4c9f-8f4b-513e4e0c7122\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz" Feb 19 13:02:09 crc kubenswrapper[4833]: E0219 13:02:09.893545 4833 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 13:02:09 crc kubenswrapper[4833]: E0219 13:02:09.893614 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8783b50-8a5e-4c9f-8f4b-513e4e0c7122-cert podName:a8783b50-8a5e-4c9f-8f4b-513e4e0c7122 nodeName:}" failed. No retries permitted until 2026-02-19 13:02:17.893596319 +0000 UTC m=+948.289115087 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a8783b50-8a5e-4c9f-8f4b-513e4e0c7122-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz" (UID: "a8783b50-8a5e-4c9f-8f4b-513e4e0c7122") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 13:02:10 crc kubenswrapper[4833]: I0219 13:02:10.199083 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-webhook-certs\") pod \"openstack-operator-controller-manager-744c6f7bcc-jsmlm\" (UID: \"81d2c5dc-91fd-4135-8408-104fc7badb60\") " pod="openstack-operators/openstack-operator-controller-manager-744c6f7bcc-jsmlm" Feb 19 13:02:10 crc kubenswrapper[4833]: I0219 13:02:10.199168 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-metrics-certs\") pod \"openstack-operator-controller-manager-744c6f7bcc-jsmlm\" (UID: \"81d2c5dc-91fd-4135-8408-104fc7badb60\") " pod="openstack-operators/openstack-operator-controller-manager-744c6f7bcc-jsmlm" Feb 19 13:02:10 crc kubenswrapper[4833]: E0219 13:02:10.199310 4833 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 13:02:10 crc kubenswrapper[4833]: E0219 13:02:10.199315 4833 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 13:02:10 crc kubenswrapper[4833]: E0219 13:02:10.199364 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-metrics-certs podName:81d2c5dc-91fd-4135-8408-104fc7badb60 nodeName:}" failed. No retries permitted until 2026-02-19 13:02:18.199349964 +0000 UTC m=+948.594868732 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-metrics-certs") pod "openstack-operator-controller-manager-744c6f7bcc-jsmlm" (UID: "81d2c5dc-91fd-4135-8408-104fc7badb60") : secret "metrics-server-cert" not found Feb 19 13:02:10 crc kubenswrapper[4833]: E0219 13:02:10.199399 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-webhook-certs podName:81d2c5dc-91fd-4135-8408-104fc7badb60 nodeName:}" failed. No retries permitted until 2026-02-19 13:02:18.199376765 +0000 UTC m=+948.594895633 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-webhook-certs") pod "openstack-operator-controller-manager-744c6f7bcc-jsmlm" (UID: "81d2c5dc-91fd-4135-8408-104fc7badb60") : secret "webhook-server-cert" not found Feb 19 13:02:15 crc kubenswrapper[4833]: I0219 13:02:15.523227 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8wsws" event={"ID":"a09fe0a0-c328-4306-b1de-c8bddc00378f","Type":"ContainerStarted","Data":"3d027fb151260c6258096f6530ff17f0333d1fb9d5596c247892cfdee5cc09b3"} Feb 19 13:02:15 crc kubenswrapper[4833]: I0219 13:02:15.523654 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8wsws" Feb 19 13:02:15 crc kubenswrapper[4833]: I0219 13:02:15.524754 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-7n7vf" event={"ID":"e6de77c2-2965-48a3-a79a-75539ca32b8b","Type":"ContainerStarted","Data":"8f53416e6c717773023a5347d82de248d4e66df7e32171511780c54695cdccd7"} Feb 19 13:02:15 crc kubenswrapper[4833]: I0219 13:02:15.524872 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-7n7vf" Feb 19 13:02:15 crc kubenswrapper[4833]: I0219 13:02:15.526067 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-w4n4c" event={"ID":"8df5aecb-140d-4845-b07c-ab75586e4b54","Type":"ContainerStarted","Data":"e42b7007ab0ee377a973e4f01d51028cafb4507f360ae1105da28331e2737b05"} Feb 19 13:02:15 crc kubenswrapper[4833]: I0219 13:02:15.526198 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-w4n4c" Feb 19 13:02:15 crc kubenswrapper[4833]: I0219 13:02:15.528792 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-njc9d" event={"ID":"a14096df-2211-4053-afb4-ad8d68ff0723","Type":"ContainerStarted","Data":"7eb5d6ce734d2f101ffe45646301b23bb24cb695a115e44aa8cccb9646218258"} Feb 19 13:02:15 crc kubenswrapper[4833]: I0219 13:02:15.528857 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-njc9d" Feb 19 13:02:15 crc kubenswrapper[4833]: I0219 13:02:15.529979 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-hprlt" event={"ID":"eaf010f7-5113-4970-b963-682d17243fc9","Type":"ContainerStarted","Data":"633f5210563668d08737b361d52b5337a740f4c4c758d3039c25420593e99f8f"} Feb 19 13:02:15 crc kubenswrapper[4833]: I0219 13:02:15.530317 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-hprlt" Feb 19 13:02:15 crc kubenswrapper[4833]: I0219 13:02:15.531583 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ztqm7" event={"ID":"dd33e5e9-4983-4954-966e-a693cc5c299b","Type":"ContainerStarted","Data":"858867b03a90ac146a0c726e0f621abdf7ee5a5be1be73f723f9107ecb199632"} Feb 19 13:02:15 crc kubenswrapper[4833]: I0219 13:02:15.531914 4833 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ztqm7" Feb 19 13:02:15 crc kubenswrapper[4833]: I0219 13:02:15.533727 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7z4m9" event={"ID":"4deeacce-2501-4276-98cf-cb615e0b4dce","Type":"ContainerStarted","Data":"3135d8ecd6e2e470935af9bfb77c3979d4798b6791b670c2137101d649193170"} Feb 19 13:02:15 crc kubenswrapper[4833]: I0219 13:02:15.534045 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7z4m9" Feb 19 13:02:15 crc kubenswrapper[4833]: I0219 13:02:15.535683 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-cn5hb" event={"ID":"636db3e6-7c84-4f25-896e-e3a542bdff19","Type":"ContainerStarted","Data":"a60294e563925d752116ccdc0954b1212dfdf8b3fde0b9d03a4c0b6f54b2924b"} Feb 19 13:02:15 crc kubenswrapper[4833]: I0219 13:02:15.535729 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-cn5hb" Feb 19 13:02:15 crc kubenswrapper[4833]: I0219 13:02:15.536882 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hq98q" event={"ID":"c960bafe-e1ce-4635-a849-758a84db3b0e","Type":"ContainerStarted","Data":"3cac34abeb7d9ece9cd12fd91ada3d39e74ebc3f28eb9b8178395c3ce0ba9018"} Feb 19 13:02:15 crc kubenswrapper[4833]: I0219 13:02:15.536937 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hq98q" Feb 19 13:02:15 crc kubenswrapper[4833]: I0219 13:02:15.538125 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-g59gc" event={"ID":"4b94f9da-5e45-4428-a709-24574552d77e","Type":"ContainerStarted","Data":"11b2f4a90a1196fb0799fa25d3d601456cbeb8d5a7185c5ad88474f686f51e9b"} Feb 19 13:02:15 crc kubenswrapper[4833]: I0219 13:02:15.538305 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-g59gc" Feb 19 13:02:15 crc kubenswrapper[4833]: I0219 13:02:15.539345 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-hn9cd" event={"ID":"1ba8dd89-0865-4766-b216-b906d4d6f77a","Type":"ContainerStarted","Data":"2854c9ae356e9edac06d4f2d94d78bf976d8e08c0c651ab65d82c10164c499ba"} Feb 19 13:02:15 crc kubenswrapper[4833]: I0219 13:02:15.539423 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-hn9cd" Feb 19 13:02:15 crc kubenswrapper[4833]: I0219 13:02:15.540148 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-wthd9" event={"ID":"70feab77-0665-499a-b6e2-b35b95384ab7","Type":"ContainerStarted","Data":"3a70ab2ec1172b2ef5b9d830502ea694fa33f99da1b3a08321d19f430022c4e4"} Feb 19 13:02:15 crc kubenswrapper[4833]: I0219 13:02:15.540447 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-wthd9" Feb 19 13:02:15 crc kubenswrapper[4833]: I0219 13:02:15.592227 4833 
Feb 19 13:02:15 crc kubenswrapper[4833]: I0219 13:02:15.592227 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8wsws" podStartSLOduration=3.081066095 podStartE2EDuration="14.592211686s" podCreationTimestamp="2026-02-19 13:02:01 +0000 UTC" firstStartedPulling="2026-02-19 13:02:03.533038538 +0000 UTC m=+933.928557306" lastFinishedPulling="2026-02-19 13:02:15.044184129 +0000 UTC m=+945.439702897" observedRunningTime="2026-02-19 13:02:15.58627932 +0000 UTC m=+945.981798088" watchObservedRunningTime="2026-02-19 13:02:15.592211686 +0000 UTC m=+945.987730454"
Feb 19 13:02:15 crc kubenswrapper[4833]: I0219 13:02:15.638778 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-njc9d" podStartSLOduration=3.003483407 podStartE2EDuration="14.638761192s" podCreationTimestamp="2026-02-19 13:02:01 +0000 UTC" firstStartedPulling="2026-02-19 13:02:03.334106192 +0000 UTC m=+933.729624960" lastFinishedPulling="2026-02-19 13:02:14.969383977 +0000 UTC m=+945.364902745" observedRunningTime="2026-02-19 13:02:15.637003959 +0000 UTC m=+946.032522717" watchObservedRunningTime="2026-02-19 13:02:15.638761192 +0000 UTC m=+946.034279960"
Feb 19 13:02:15 crc kubenswrapper[4833]: I0219 13:02:15.668010 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-hn9cd" podStartSLOduration=2.900133062 podStartE2EDuration="14.667994111s" podCreationTimestamp="2026-02-19 13:02:01 +0000 UTC" firstStartedPulling="2026-02-19 13:02:03.188004076 +0000 UTC m=+933.583522844" lastFinishedPulling="2026-02-19 13:02:14.955865125 +0000 UTC m=+945.351383893" observedRunningTime="2026-02-19 13:02:15.667248103 +0000 UTC m=+946.062766871" watchObservedRunningTime="2026-02-19 13:02:15.667994111 +0000 UTC m=+946.063512869"
Feb 19 13:02:15 crc kubenswrapper[4833]: I0219 13:02:15.728711 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-7n7vf" podStartSLOduration=2.952358678 podStartE2EDuration="14.728698675s" podCreationTimestamp="2026-02-19 13:02:01 +0000 UTC" firstStartedPulling="2026-02-19 13:02:03.188684863 +0000 UTC m=+933.584203631" lastFinishedPulling="2026-02-19 13:02:14.96502486 +0000 UTC m=+945.360543628" observedRunningTime="2026-02-19 13:02:15.726897841 +0000 UTC m=+946.122416609" watchObservedRunningTime="2026-02-19 13:02:15.728698675 +0000 UTC m=+946.124217433"
Feb 19 13:02:15 crc kubenswrapper[4833]: I0219 13:02:15.744125 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 13:02:15 crc kubenswrapper[4833]: I0219 13:02:15.744178 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 13:02:15 crc kubenswrapper[4833]: I0219 13:02:15.824162 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-wthd9" podStartSLOduration=3.198570738 podStartE2EDuration="14.824148685s" podCreationTimestamp="2026-02-19 13:02:01 +0000 UTC" firstStartedPulling="2026-02-19 13:02:03.339126075 +0000 UTC m=+933.734644843" lastFinishedPulling="2026-02-19 13:02:14.964704022 +0000 UTC m=+945.360222790" observedRunningTime="2026-02-19 13:02:15.765818069 +0000 UTC m=+946.161336837" watchObservedRunningTime="2026-02-19 13:02:15.824148685 +0000 UTC m=+946.219667453"
Feb 19 13:02:15 crc kubenswrapper[4833]: I0219 13:02:15.879727 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7z4m9" podStartSLOduration=3.253414708 podStartE2EDuration="14.879709062s" podCreationTimestamp="2026-02-19 13:02:01 +0000 UTC" firstStartedPulling="2026-02-19 13:02:03.329272773 +0000 UTC m=+933.724791541" lastFinishedPulling="2026-02-19 13:02:14.955567117 +0000 UTC m=+945.351085895" observedRunningTime="2026-02-19 13:02:15.82882164 +0000 UTC m=+946.224340408" watchObservedRunningTime="2026-02-19 13:02:15.879709062 +0000 UTC m=+946.275227830"
Feb 19 13:02:16 crc kubenswrapper[4833]: I0219 13:02:16.013417 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-cn5hb" podStartSLOduration=3.580719204 podStartE2EDuration="15.013401133s" podCreationTimestamp="2026-02-19 13:02:01 +0000 UTC" firstStartedPulling="2026-02-19 13:02:03.524618351 +0000 UTC m=+933.920137119" lastFinishedPulling="2026-02-19 13:02:14.95730027 +0000 UTC m=+945.352819048" observedRunningTime="2026-02-19 13:02:15.882710606 +0000 UTC m=+946.278229374" watchObservedRunningTime="2026-02-19 13:02:16.013401133 +0000 UTC m=+946.408919901"
Feb 19 13:02:16 crc kubenswrapper[4833]: I0219 13:02:16.064524 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hq98q" podStartSLOduration=3.140372786 podStartE2EDuration="15.064504861s" podCreationTimestamp="2026-02-19 13:02:01 +0000 UTC" firstStartedPulling="2026-02-19 13:02:03.040795773 +0000 UTC m=+933.436314551" lastFinishedPulling="2026-02-19 13:02:14.964927858 +0000 UTC m=+945.360446626" observedRunningTime="2026-02-19 13:02:16.015088544 +0000 UTC m=+946.410607302" watchObservedRunningTime="2026-02-19 13:02:16.064504861 +0000 UTC m=+946.460023629"
Feb 19 13:02:16 crc kubenswrapper[4833]: I0219 13:02:16.067443 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-hprlt" podStartSLOduration=3.44403456 podStartE2EDuration="15.067428193s" podCreationTimestamp="2026-02-19 13:02:01 +0000 UTC" firstStartedPulling="2026-02-19 13:02:03.331544629 +0000 UTC m=+933.727063397" lastFinishedPulling="2026-02-19 13:02:14.954938262 +0000 UTC m=+945.350457030" observedRunningTime="2026-02-19 13:02:16.057035327 +0000 UTC m=+946.452554095" watchObservedRunningTime="2026-02-19 13:02:16.067428193 +0000 UTC m=+946.462946961"
Feb 19 13:02:16 crc kubenswrapper[4833]: I0219 13:02:16.081191 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-g59gc" podStartSLOduration=3.316244914 podStartE2EDuration="15.081170861s" podCreationTimestamp="2026-02-19 13:02:01 +0000 UTC" firstStartedPulling="2026-02-19 13:02:03.190160119 +0000 UTC m=+933.585678887" lastFinishedPulling="2026-02-19 13:02:14.955086066 +0000 UTC m=+945.350604834" observedRunningTime="2026-02-19 13:02:16.076078945 +0000 UTC m=+946.471597713" watchObservedRunningTime="2026-02-19 13:02:16.081170861 +0000 UTC m=+946.476689629"
Feb 19 13:02:16 crc kubenswrapper[4833]: I0219 13:02:16.101540 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ztqm7" podStartSLOduration=3.188867049 podStartE2EDuration="15.101525412s" podCreationTimestamp="2026-02-19 13:02:01 +0000 UTC" firstStartedPulling="2026-02-19 13:02:03.036130328 +0000 UTC m=+933.431649096" lastFinishedPulling="2026-02-19 13:02:14.948788691 +0000 UTC m=+945.344307459" observedRunningTime="2026-02-19 13:02:16.098003895 +0000 UTC m=+946.493522653" watchObservedRunningTime="2026-02-19 13:02:16.101525412 +0000 UTC m=+946.497044180"
Feb 19 13:02:16 crc kubenswrapper[4833]: I0219 13:02:16.125904 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-w4n4c" podStartSLOduration=3.286520862 podStartE2EDuration="15.125888461s" podCreationTimestamp="2026-02-19 13:02:01 +0000 UTC" firstStartedPulling="2026-02-19 13:02:03.189475632 +0000 UTC m=+933.584994400" lastFinishedPulling="2026-02-19 13:02:15.028843231 +0000 UTC m=+945.424361999" observedRunningTime="2026-02-19 13:02:16.120791816 +0000 UTC m=+946.516310584" watchObservedRunningTime="2026-02-19 13:02:16.125888461 +0000 UTC m=+946.521407229"
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ab70788d-b168-497b-bea0-4847ee80ce73-cert") pod "infra-operator-controller-manager-79d975b745-cvzzp" (UID: "ab70788d-b168-497b-bea0-4847ee80ce73") : secret "infra-operator-webhook-server-cert" not found Feb 19 13:02:17 crc kubenswrapper[4833]: I0219 13:02:17.915339 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a8783b50-8a5e-4c9f-8f4b-513e4e0c7122-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz\" (UID: \"a8783b50-8a5e-4c9f-8f4b-513e4e0c7122\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz" Feb 19 13:02:17 crc kubenswrapper[4833]: E0219 13:02:17.915567 4833 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 13:02:17 crc kubenswrapper[4833]: E0219 13:02:17.915675 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8783b50-8a5e-4c9f-8f4b-513e4e0c7122-cert podName:a8783b50-8a5e-4c9f-8f4b-513e4e0c7122 nodeName:}" failed. No retries permitted until 2026-02-19 13:02:33.915628732 +0000 UTC m=+964.311147510 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a8783b50-8a5e-4c9f-8f4b-513e4e0c7122-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz" (UID: "a8783b50-8a5e-4c9f-8f4b-513e4e0c7122") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 19 13:02:18 crc kubenswrapper[4833]: I0219 13:02:18.218976 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-webhook-certs\") pod \"openstack-operator-controller-manager-744c6f7bcc-jsmlm\" (UID: \"81d2c5dc-91fd-4135-8408-104fc7badb60\") " pod="openstack-operators/openstack-operator-controller-manager-744c6f7bcc-jsmlm" Feb 19 13:02:18 crc kubenswrapper[4833]: I0219 13:02:18.219056 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-metrics-certs\") pod \"openstack-operator-controller-manager-744c6f7bcc-jsmlm\" (UID: \"81d2c5dc-91fd-4135-8408-104fc7badb60\") " pod="openstack-operators/openstack-operator-controller-manager-744c6f7bcc-jsmlm" Feb 19 13:02:18 crc kubenswrapper[4833]: E0219 13:02:18.219196 4833 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 19 13:02:18 crc kubenswrapper[4833]: E0219 13:02:18.219244 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-metrics-certs podName:81d2c5dc-91fd-4135-8408-104fc7badb60 nodeName:}" failed. No retries permitted until 2026-02-19 13:02:34.219229785 +0000 UTC m=+964.614748553 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-metrics-certs") pod "openstack-operator-controller-manager-744c6f7bcc-jsmlm" (UID: "81d2c5dc-91fd-4135-8408-104fc7badb60") : secret "metrics-server-cert" not found Feb 19 13:02:18 crc kubenswrapper[4833]: E0219 13:02:18.219462 4833 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 19 13:02:18 crc kubenswrapper[4833]: E0219 13:02:18.219555 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-webhook-certs podName:81d2c5dc-91fd-4135-8408-104fc7badb60 nodeName:}" failed. No retries permitted until 2026-02-19 13:02:34.219536432 +0000 UTC m=+964.615055200 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-webhook-certs") pod "openstack-operator-controller-manager-744c6f7bcc-jsmlm" (UID: "81d2c5dc-91fd-4135-8408-104fc7badb60") : secret "webhook-server-cert" not found Feb 19 13:02:20 crc kubenswrapper[4833]: I0219 13:02:20.574708 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-24cxm" event={"ID":"d7b1ebb3-ea0b-4e2f-b27a-e77abee17693","Type":"ContainerStarted","Data":"bc9007955fc5892a408a98998906e5eaa055cade47f215eae96d0f65f1e73183"} Feb 19 13:02:20 crc kubenswrapper[4833]: I0219 13:02:20.576170 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-24cxm" Feb 19 13:02:20 crc kubenswrapper[4833]: I0219 13:02:20.577724 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hrnmv" event={"ID":"3a2db5f5-bbec-4673-b32b-eef31c488a12","Type":"ContainerStarted","Data":"4defbab2da7f569a8875c9e71cfdb308a5e3437201afbcbbce3fb09ea05e6e61"} Feb 19 13:02:20 crc kubenswrapper[4833]: I0219 13:02:20.578214 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hrnmv" Feb 19 13:02:20 crc kubenswrapper[4833]: I0219 13:02:20.596045 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-24cxm" podStartSLOduration=2.16695604 podStartE2EDuration="18.596024873s" podCreationTimestamp="2026-02-19 13:02:02 +0000 UTC" firstStartedPulling="2026-02-19 13:02:03.560996976 +0000 UTC m=+933.956515744" lastFinishedPulling="2026-02-19 13:02:19.990065809 +0000 UTC m=+950.385584577" observedRunningTime="2026-02-19 13:02:20.59222924 +0000 UTC m=+950.987748008" watchObservedRunningTime="2026-02-19 13:02:20.596024873 +0000 UTC m=+950.991543641" Feb 19 13:02:20 crc kubenswrapper[4833]: I0219 13:02:20.618669 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hrnmv" podStartSLOduration=3.147357528 podStartE2EDuration="19.61865126s" podCreationTimestamp="2026-02-19 13:02:01 +0000 UTC" firstStartedPulling="2026-02-19 13:02:03.540551403 +0000 UTC m=+933.936070181" lastFinishedPulling="2026-02-19 13:02:20.011845145 +0000 UTC m=+950.407363913" observedRunningTime="2026-02-19 13:02:20.611643768 +0000 UTC m=+951.007162536" watchObservedRunningTime="2026-02-19 13:02:20.61865126 
+0000 UTC m=+951.014170028" Feb 19 13:02:21 crc kubenswrapper[4833]: I0219 13:02:21.892383 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-ztqm7" Feb 19 13:02:22 crc kubenswrapper[4833]: I0219 13:02:22.004764 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-hq98q" Feb 19 13:02:22 crc kubenswrapper[4833]: I0219 13:02:22.029932 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-wthd9" Feb 19 13:02:22 crc kubenswrapper[4833]: I0219 13:02:22.139888 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-hprlt" Feb 19 13:02:22 crc kubenswrapper[4833]: I0219 13:02:22.147626 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-7n7vf" Feb 19 13:02:22 crc kubenswrapper[4833]: I0219 13:02:22.164698 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-njc9d" Feb 19 13:02:22 crc kubenswrapper[4833]: I0219 13:02:22.172041 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-hn9cd" Feb 19 13:02:22 crc kubenswrapper[4833]: I0219 13:02:22.178369 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-g59gc" Feb 19 13:02:22 crc kubenswrapper[4833]: I0219 13:02:22.188591 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-w4n4c" Feb 19 13:02:22 crc kubenswrapper[4833]: I0219 13:02:22.332054 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-7z4m9" Feb 19 13:02:22 crc kubenswrapper[4833]: I0219 13:02:22.383022 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-cn5hb" Feb 19 13:02:22 crc kubenswrapper[4833]: I0219 13:02:22.400615 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-8wsws" Feb 19 13:02:28 crc kubenswrapper[4833]: I0219 13:02:28.631561 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bbl2n" event={"ID":"eddad8e9-ebc8-4772-9b30-76fc7bd09919","Type":"ContainerStarted","Data":"d4578cd297727a381b41ab19c4824b2391f5af51a59ee79e3bb5d8b215be534d"} Feb 19 13:02:28 crc kubenswrapper[4833]: I0219 13:02:28.632303 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bbl2n" Feb 19 13:02:28 crc kubenswrapper[4833]: I0219 13:02:28.632918 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-lw9pq" event={"ID":"e0e6cafc-957b-4ebd-ad08-1bef03debe49","Type":"ContainerStarted","Data":"a9a6bace496d29a07f2a3fc515c4d1afedecc36f4aa82cc4dbc79f5de243af6a"} Feb 19 13:02:28 crc kubenswrapper[4833]: 
I0219 13:02:28.633066 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-lw9pq" Feb 19 13:02:28 crc kubenswrapper[4833]: I0219 13:02:28.634422 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-cn6cv" event={"ID":"84b0c5a7-e111-4ee9-999b-5da00d00ffd0","Type":"ContainerStarted","Data":"d7b2aedb82f2ad679b00d0432c7daab117ae185fbd885836af3120d6f511af8e"} Feb 19 13:02:28 crc kubenswrapper[4833]: I0219 13:02:28.634610 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-cn6cv" Feb 19 13:02:28 crc kubenswrapper[4833]: I0219 13:02:28.636068 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zwzkq" event={"ID":"1e12420e-fd8b-4ef2-bc12-9b3be0efa58a","Type":"ContainerStarted","Data":"74ee3c4daf75a623d0dbacb31c1b511a1d56b1700fcc5e7719c20b0c556a155c"} Feb 19 13:02:28 crc kubenswrapper[4833]: I0219 13:02:28.637585 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-7d2vx" event={"ID":"eed2b359-6b1f-4db4-947a-6ed3bf4385cc","Type":"ContainerStarted","Data":"15ab71503a41b8107be925b097d78aa94f0de305fc60021d7888d5591d819045"} Feb 19 13:02:28 crc kubenswrapper[4833]: I0219 13:02:28.637723 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-7d2vx" Feb 19 13:02:28 crc kubenswrapper[4833]: I0219 13:02:28.638991 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-ng9mx" event={"ID":"0168dd3a-5296-440d-8b46-d858da1cfeb6","Type":"ContainerStarted","Data":"c50fa3814ed4952d6713883c2aa614d594d6a40e4201d7742a18bd29d0213677"} Feb 19 13:02:28 crc kubenswrapper[4833]: I0219 13:02:28.639114 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-ng9mx" Feb 19 13:02:28 crc kubenswrapper[4833]: I0219 13:02:28.651001 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bbl2n" podStartSLOduration=3.419003724 podStartE2EDuration="27.650986167s" podCreationTimestamp="2026-02-19 13:02:01 +0000 UTC" firstStartedPulling="2026-02-19 13:02:03.551975814 +0000 UTC m=+933.947494582" lastFinishedPulling="2026-02-19 13:02:27.783958247 +0000 UTC m=+958.179477025" observedRunningTime="2026-02-19 13:02:28.649035669 +0000 UTC m=+959.044554437" watchObservedRunningTime="2026-02-19 13:02:28.650986167 +0000 UTC m=+959.046504935" Feb 19 13:02:28 crc kubenswrapper[4833]: I0219 13:02:28.669721 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-7d2vx" podStartSLOduration=3.445391864 podStartE2EDuration="27.669704468s" podCreationTimestamp="2026-02-19 13:02:01 +0000 UTC" firstStartedPulling="2026-02-19 13:02:03.559671014 +0000 UTC m=+933.955189782" lastFinishedPulling="2026-02-19 13:02:27.783983618 +0000 UTC m=+958.179502386" observedRunningTime="2026-02-19 13:02:28.665272399 +0000 UTC m=+959.060791167" watchObservedRunningTime="2026-02-19 13:02:28.669704468 +0000 UTC m=+959.065223236" Feb 19 13:02:28 crc kubenswrapper[4833]: I0219 
Feb 19 13:02:28 crc kubenswrapper[4833]: I0219 13:02:28.690090 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-cn6cv" podStartSLOduration=2.454111098 podStartE2EDuration="26.690070299s" podCreationTimestamp="2026-02-19 13:02:02 +0000 UTC" firstStartedPulling="2026-02-19 13:02:03.634596288 +0000 UTC m=+934.030115056" lastFinishedPulling="2026-02-19 13:02:27.870555489 +0000 UTC m=+958.266074257" observedRunningTime="2026-02-19 13:02:28.68926051 +0000 UTC m=+959.084779278" watchObservedRunningTime="2026-02-19 13:02:28.690070299 +0000 UTC m=+959.085589067"
Feb 19 13:02:28 crc kubenswrapper[4833]: I0219 13:02:28.709056 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-lw9pq" podStartSLOduration=3.476663283 podStartE2EDuration="27.709039716s" podCreationTimestamp="2026-02-19 13:02:01 +0000 UTC" firstStartedPulling="2026-02-19 13:02:03.560775521 +0000 UTC m=+933.956294289" lastFinishedPulling="2026-02-19 13:02:27.793151954 +0000 UTC m=+958.188670722" observedRunningTime="2026-02-19 13:02:28.70511016 +0000 UTC m=+959.100628928" watchObservedRunningTime="2026-02-19 13:02:28.709039716 +0000 UTC m=+959.104558484"
Feb 19 13:02:28 crc kubenswrapper[4833]: I0219 13:02:28.721156 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-ng9mx" podStartSLOduration=3.441051317 podStartE2EDuration="27.721139724s" podCreationTimestamp="2026-02-19 13:02:01 +0000 UTC" firstStartedPulling="2026-02-19 13:02:03.567089156 +0000 UTC m=+933.962607924" lastFinishedPulling="2026-02-19 13:02:27.847177523 +0000 UTC m=+958.242696331" observedRunningTime="2026-02-19 13:02:28.718752645 +0000 UTC m=+959.114271413" watchObservedRunningTime="2026-02-19 13:02:28.721139724 +0000 UTC m=+959.116658492"
Feb 19 13:02:28 crc kubenswrapper[4833]: I0219 13:02:28.734672 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zwzkq" podStartSLOduration=2.463317645 podStartE2EDuration="26.734656997s" podCreationTimestamp="2026-02-19 13:02:02 +0000 UTC" firstStartedPulling="2026-02-19 13:02:03.575993675 +0000 UTC m=+933.971512443" lastFinishedPulling="2026-02-19 13:02:27.847332987 +0000 UTC m=+958.242851795" observedRunningTime="2026-02-19 13:02:28.731176661 +0000 UTC m=+959.126695429" watchObservedRunningTime="2026-02-19 13:02:28.734656997 +0000 UTC m=+959.130175765"
Feb 19 13:02:32 crc kubenswrapper[4833]: I0219 13:02:32.396559 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-hrnmv"
Feb 19 13:02:32 crc kubenswrapper[4833]: I0219 13:02:32.598957 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-24cxm"
Feb 19 13:02:33 crc kubenswrapper[4833]: I0219 13:02:33.651810 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ab70788d-b168-497b-bea0-4847ee80ce73-cert\") pod \"infra-operator-controller-manager-79d975b745-cvzzp\" (UID: \"ab70788d-b168-497b-bea0-4847ee80ce73\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cvzzp"
Feb 19 13:02:33 crc kubenswrapper[4833]: I0219 13:02:33.664597 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ab70788d-b168-497b-bea0-4847ee80ce73-cert\") pod \"infra-operator-controller-manager-79d975b745-cvzzp\" (UID: \"ab70788d-b168-497b-bea0-4847ee80ce73\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-cvzzp"
Feb 19 13:02:33 crc kubenswrapper[4833]: I0219 13:02:33.852258 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-qnltf"
Feb 19 13:02:33 crc kubenswrapper[4833]: I0219 13:02:33.860403 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-cvzzp"
Feb 19 13:02:33 crc kubenswrapper[4833]: I0219 13:02:33.955824 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a8783b50-8a5e-4c9f-8f4b-513e4e0c7122-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz\" (UID: \"a8783b50-8a5e-4c9f-8f4b-513e4e0c7122\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz"
Feb 19 13:02:33 crc kubenswrapper[4833]: I0219 13:02:33.966305 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a8783b50-8a5e-4c9f-8f4b-513e4e0c7122-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz\" (UID: \"a8783b50-8a5e-4c9f-8f4b-513e4e0c7122\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz"
Feb 19 13:02:34 crc kubenswrapper[4833]: I0219 13:02:34.138692 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-v2dk4"
Feb 19 13:02:34 crc kubenswrapper[4833]: I0219 13:02:34.147997 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz"
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz" Feb 19 13:02:34 crc kubenswrapper[4833]: I0219 13:02:34.260304 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-webhook-certs\") pod \"openstack-operator-controller-manager-744c6f7bcc-jsmlm\" (UID: \"81d2c5dc-91fd-4135-8408-104fc7badb60\") " pod="openstack-operators/openstack-operator-controller-manager-744c6f7bcc-jsmlm" Feb 19 13:02:34 crc kubenswrapper[4833]: I0219 13:02:34.260594 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-metrics-certs\") pod \"openstack-operator-controller-manager-744c6f7bcc-jsmlm\" (UID: \"81d2c5dc-91fd-4135-8408-104fc7badb60\") " pod="openstack-operators/openstack-operator-controller-manager-744c6f7bcc-jsmlm" Feb 19 13:02:34 crc kubenswrapper[4833]: I0219 13:02:34.263941 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-metrics-certs\") pod \"openstack-operator-controller-manager-744c6f7bcc-jsmlm\" (UID: \"81d2c5dc-91fd-4135-8408-104fc7badb60\") " pod="openstack-operators/openstack-operator-controller-manager-744c6f7bcc-jsmlm" Feb 19 13:02:34 crc kubenswrapper[4833]: I0219 13:02:34.264622 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/81d2c5dc-91fd-4135-8408-104fc7badb60-webhook-certs\") pod \"openstack-operator-controller-manager-744c6f7bcc-jsmlm\" (UID: \"81d2c5dc-91fd-4135-8408-104fc7badb60\") " pod="openstack-operators/openstack-operator-controller-manager-744c6f7bcc-jsmlm" Feb 19 13:02:34 crc kubenswrapper[4833]: I0219 13:02:34.276075 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-cvzzp"] Feb 19 13:02:34 crc kubenswrapper[4833]: W0219 13:02:34.287704 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab70788d_b168_497b_bea0_4847ee80ce73.slice/crio-e82ca82beb94fdc098a6a74119f87e80ea5d6cdd060ee25bbecddef5c556610f WatchSource:0}: Error finding container e82ca82beb94fdc098a6a74119f87e80ea5d6cdd060ee25bbecddef5c556610f: Status 404 returned error can't find the container with id e82ca82beb94fdc098a6a74119f87e80ea5d6cdd060ee25bbecddef5c556610f Feb 19 13:02:34 crc kubenswrapper[4833]: I0219 13:02:34.371957 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz"] Feb 19 13:02:34 crc kubenswrapper[4833]: W0219 13:02:34.372130 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8783b50_8a5e_4c9f_8f4b_513e4e0c7122.slice/crio-0de764f6713c664b72e7553db2d0173eefd6f6bb81d1bf2e9a6dad276e1e9725 WatchSource:0}: Error finding container 0de764f6713c664b72e7553db2d0173eefd6f6bb81d1bf2e9a6dad276e1e9725: Status 404 returned error can't find the container with id 0de764f6713c664b72e7553db2d0173eefd6f6bb81d1bf2e9a6dad276e1e9725 Feb 19 13:02:34 crc kubenswrapper[4833]: I0219 13:02:34.416622 4833 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-bv8h4" Feb 19 13:02:34 crc kubenswrapper[4833]: I0219 13:02:34.425039 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-744c6f7bcc-jsmlm" Feb 19 13:02:34 crc kubenswrapper[4833]: I0219 13:02:34.676194 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz" event={"ID":"a8783b50-8a5e-4c9f-8f4b-513e4e0c7122","Type":"ContainerStarted","Data":"0de764f6713c664b72e7553db2d0173eefd6f6bb81d1bf2e9a6dad276e1e9725"} Feb 19 13:02:34 crc kubenswrapper[4833]: I0219 13:02:34.678020 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-cvzzp" event={"ID":"ab70788d-b168-497b-bea0-4847ee80ce73","Type":"ContainerStarted","Data":"e82ca82beb94fdc098a6a74119f87e80ea5d6cdd060ee25bbecddef5c556610f"} Feb 19 13:02:34 crc kubenswrapper[4833]: I0219 13:02:34.842381 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-744c6f7bcc-jsmlm"] Feb 19 13:02:35 crc kubenswrapper[4833]: I0219 13:02:35.693938 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-744c6f7bcc-jsmlm" event={"ID":"81d2c5dc-91fd-4135-8408-104fc7badb60","Type":"ContainerStarted","Data":"87b423ab1ab083e9b8d21571ba002cec283129eebfe341c23d564c91f27705f8"} Feb 19 13:02:36 crc kubenswrapper[4833]: I0219 13:02:36.701841 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-744c6f7bcc-jsmlm" event={"ID":"81d2c5dc-91fd-4135-8408-104fc7badb60","Type":"ContainerStarted","Data":"37e04e5b3ec95d6211007244ec225c354ff0fde4594c7dc9c5ab4436126e5352"} Feb 19 13:02:36 crc kubenswrapper[4833]: I0219 13:02:36.702056 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-744c6f7bcc-jsmlm" Feb 19 13:02:36 crc kubenswrapper[4833]: I0219 13:02:36.777147 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-744c6f7bcc-jsmlm" podStartSLOduration=34.777131934 podStartE2EDuration="34.777131934s" podCreationTimestamp="2026-02-19 13:02:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:02:36.773065094 +0000 UTC m=+967.168583862" watchObservedRunningTime="2026-02-19 13:02:36.777131934 +0000 UTC m=+967.172650702" Feb 19 13:02:39 crc kubenswrapper[4833]: I0219 13:02:39.723148 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-cvzzp" event={"ID":"ab70788d-b168-497b-bea0-4847ee80ce73","Type":"ContainerStarted","Data":"66ea1e9f484b51173738fe77fe24798c0c479f414ce4a3ca23ff31cbcc8d9e59"} Feb 19 13:02:39 crc kubenswrapper[4833]: I0219 13:02:39.724769 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-cvzzp" Feb 19 13:02:39 crc kubenswrapper[4833]: I0219 13:02:39.724917 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz" 
event={"ID":"a8783b50-8a5e-4c9f-8f4b-513e4e0c7122","Type":"ContainerStarted","Data":"d3b06d4a8491db9fc4d13a28b46405cebbd4f45a519bb4c08230b8af32640180"} Feb 19 13:02:39 crc kubenswrapper[4833]: I0219 13:02:39.725059 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz" Feb 19 13:02:39 crc kubenswrapper[4833]: I0219 13:02:39.736152 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-cvzzp" podStartSLOduration=33.881708413 podStartE2EDuration="38.736135153s" podCreationTimestamp="2026-02-19 13:02:01 +0000 UTC" firstStartedPulling="2026-02-19 13:02:34.289965768 +0000 UTC m=+964.685484536" lastFinishedPulling="2026-02-19 13:02:39.144392488 +0000 UTC m=+969.539911276" observedRunningTime="2026-02-19 13:02:39.734593535 +0000 UTC m=+970.130112303" watchObservedRunningTime="2026-02-19 13:02:39.736135153 +0000 UTC m=+970.131653921" Feb 19 13:02:39 crc kubenswrapper[4833]: I0219 13:02:39.772090 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz" podStartSLOduration=33.982182955 podStartE2EDuration="38.772067667s" podCreationTimestamp="2026-02-19 13:02:01 +0000 UTC" firstStartedPulling="2026-02-19 13:02:34.373803661 +0000 UTC m=+964.769322429" lastFinishedPulling="2026-02-19 13:02:39.163688363 +0000 UTC m=+969.559207141" observedRunningTime="2026-02-19 13:02:39.763168548 +0000 UTC m=+970.158687326" watchObservedRunningTime="2026-02-19 13:02:39.772067667 +0000 UTC m=+970.167586435" Feb 19 13:02:42 crc kubenswrapper[4833]: I0219 13:02:42.040675 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-ng9mx" Feb 19 13:02:42 crc kubenswrapper[4833]: I0219 13:02:42.355643 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-bbl2n" Feb 19 13:02:42 crc kubenswrapper[4833]: I0219 13:02:42.383236 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-7d2vx" Feb 19 13:02:42 crc kubenswrapper[4833]: I0219 13:02:42.500773 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-lw9pq" Feb 19 13:02:42 crc kubenswrapper[4833]: I0219 13:02:42.551399 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-cn6cv" Feb 19 13:02:44 crc kubenswrapper[4833]: I0219 13:02:44.157387 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz" Feb 19 13:02:44 crc kubenswrapper[4833]: I0219 13:02:44.431899 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-744c6f7bcc-jsmlm" Feb 19 13:02:45 crc kubenswrapper[4833]: I0219 13:02:45.744554 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Feb 19 13:02:45 crc kubenswrapper[4833]: I0219 13:02:45.744942 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:02:53 crc kubenswrapper[4833]: I0219 13:02:53.868953 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-cvzzp" Feb 19 13:03:04 crc kubenswrapper[4833]: I0219 13:03:04.697909 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jd8jh"] Feb 19 13:03:04 crc kubenswrapper[4833]: I0219 13:03:04.701287 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jd8jh" Feb 19 13:03:04 crc kubenswrapper[4833]: I0219 13:03:04.715461 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jd8jh"] Feb 19 13:03:04 crc kubenswrapper[4833]: I0219 13:03:04.880343 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n28hn\" (UniqueName: \"kubernetes.io/projected/5239266e-48b6-46fa-bc3a-30c8d244db3b-kube-api-access-n28hn\") pod \"redhat-marketplace-jd8jh\" (UID: \"5239266e-48b6-46fa-bc3a-30c8d244db3b\") " pod="openshift-marketplace/redhat-marketplace-jd8jh" Feb 19 13:03:04 crc kubenswrapper[4833]: I0219 13:03:04.880534 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5239266e-48b6-46fa-bc3a-30c8d244db3b-utilities\") pod \"redhat-marketplace-jd8jh\" (UID: \"5239266e-48b6-46fa-bc3a-30c8d244db3b\") " pod="openshift-marketplace/redhat-marketplace-jd8jh" Feb 19 13:03:04 crc kubenswrapper[4833]: I0219 13:03:04.880597 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5239266e-48b6-46fa-bc3a-30c8d244db3b-catalog-content\") pod \"redhat-marketplace-jd8jh\" (UID: \"5239266e-48b6-46fa-bc3a-30c8d244db3b\") " pod="openshift-marketplace/redhat-marketplace-jd8jh" Feb 19 13:03:04 crc kubenswrapper[4833]: I0219 13:03:04.982052 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n28hn\" (UniqueName: \"kubernetes.io/projected/5239266e-48b6-46fa-bc3a-30c8d244db3b-kube-api-access-n28hn\") pod \"redhat-marketplace-jd8jh\" (UID: \"5239266e-48b6-46fa-bc3a-30c8d244db3b\") " pod="openshift-marketplace/redhat-marketplace-jd8jh" Feb 19 13:03:04 crc kubenswrapper[4833]: I0219 13:03:04.982197 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5239266e-48b6-46fa-bc3a-30c8d244db3b-utilities\") pod \"redhat-marketplace-jd8jh\" (UID: \"5239266e-48b6-46fa-bc3a-30c8d244db3b\") " pod="openshift-marketplace/redhat-marketplace-jd8jh" Feb 19 13:03:04 crc kubenswrapper[4833]: I0219 13:03:04.982240 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5239266e-48b6-46fa-bc3a-30c8d244db3b-catalog-content\") pod \"redhat-marketplace-jd8jh\" (UID: \"5239266e-48b6-46fa-bc3a-30c8d244db3b\") " 
pod="openshift-marketplace/redhat-marketplace-jd8jh" Feb 19 13:03:04 crc kubenswrapper[4833]: I0219 13:03:04.982942 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5239266e-48b6-46fa-bc3a-30c8d244db3b-catalog-content\") pod \"redhat-marketplace-jd8jh\" (UID: \"5239266e-48b6-46fa-bc3a-30c8d244db3b\") " pod="openshift-marketplace/redhat-marketplace-jd8jh" Feb 19 13:03:04 crc kubenswrapper[4833]: I0219 13:03:04.982977 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5239266e-48b6-46fa-bc3a-30c8d244db3b-utilities\") pod \"redhat-marketplace-jd8jh\" (UID: \"5239266e-48b6-46fa-bc3a-30c8d244db3b\") " pod="openshift-marketplace/redhat-marketplace-jd8jh" Feb 19 13:03:05 crc kubenswrapper[4833]: I0219 13:03:05.003290 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n28hn\" (UniqueName: \"kubernetes.io/projected/5239266e-48b6-46fa-bc3a-30c8d244db3b-kube-api-access-n28hn\") pod \"redhat-marketplace-jd8jh\" (UID: \"5239266e-48b6-46fa-bc3a-30c8d244db3b\") " pod="openshift-marketplace/redhat-marketplace-jd8jh" Feb 19 13:03:05 crc kubenswrapper[4833]: I0219 13:03:05.034947 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jd8jh" Feb 19 13:03:05 crc kubenswrapper[4833]: I0219 13:03:05.483596 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jd8jh"] Feb 19 13:03:05 crc kubenswrapper[4833]: I0219 13:03:05.974946 4833 generic.go:334] "Generic (PLEG): container finished" podID="5239266e-48b6-46fa-bc3a-30c8d244db3b" containerID="20cc72891c21af42728020addc6806654334b074da14eb4b77c8032a84909375" exitCode=0 Feb 19 13:03:05 crc kubenswrapper[4833]: I0219 13:03:05.975646 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jd8jh" event={"ID":"5239266e-48b6-46fa-bc3a-30c8d244db3b","Type":"ContainerDied","Data":"20cc72891c21af42728020addc6806654334b074da14eb4b77c8032a84909375"} Feb 19 13:03:05 crc kubenswrapper[4833]: I0219 13:03:05.975697 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jd8jh" event={"ID":"5239266e-48b6-46fa-bc3a-30c8d244db3b","Type":"ContainerStarted","Data":"c8d276c14d1e24934db2bb7b3d14c326f88dcbed476bdce13f496c4827548770"} Feb 19 13:03:06 crc kubenswrapper[4833]: I0219 13:03:06.984969 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 13:03:09 crc kubenswrapper[4833]: I0219 13:03:09.001345 4833 generic.go:334] "Generic (PLEG): container finished" podID="5239266e-48b6-46fa-bc3a-30c8d244db3b" containerID="2ffeb07328f0fcb9fce559128b3a9205bf89853d27acb75c0154bc1081b869b6" exitCode=0 Feb 19 13:03:09 crc kubenswrapper[4833]: I0219 13:03:09.001455 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jd8jh" event={"ID":"5239266e-48b6-46fa-bc3a-30c8d244db3b","Type":"ContainerDied","Data":"2ffeb07328f0fcb9fce559128b3a9205bf89853d27acb75c0154bc1081b869b6"} Feb 19 13:03:11 crc kubenswrapper[4833]: I0219 13:03:11.566778 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-g4tm6"] Feb 19 13:03:11 crc kubenswrapper[4833]: I0219 13:03:11.569784 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-g4tm6" Feb 19 13:03:11 crc kubenswrapper[4833]: I0219 13:03:11.576757 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 19 13:03:11 crc kubenswrapper[4833]: I0219 13:03:11.576782 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 19 13:03:11 crc kubenswrapper[4833]: I0219 13:03:11.577045 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-664gm" Feb 19 13:03:11 crc kubenswrapper[4833]: I0219 13:03:11.577088 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 19 13:03:11 crc kubenswrapper[4833]: I0219 13:03:11.582335 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60e9aea3-acf2-43d3-9f95-9c2b714daab3-config\") pod \"dnsmasq-dns-675f4bcbfc-g4tm6\" (UID: \"60e9aea3-acf2-43d3-9f95-9c2b714daab3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-g4tm6" Feb 19 13:03:11 crc kubenswrapper[4833]: I0219 13:03:11.582383 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cqsh\" (UniqueName: \"kubernetes.io/projected/60e9aea3-acf2-43d3-9f95-9c2b714daab3-kube-api-access-4cqsh\") pod \"dnsmasq-dns-675f4bcbfc-g4tm6\" (UID: \"60e9aea3-acf2-43d3-9f95-9c2b714daab3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-g4tm6" Feb 19 13:03:11 crc kubenswrapper[4833]: I0219 13:03:11.584308 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-g4tm6"] Feb 19 13:03:11 crc kubenswrapper[4833]: I0219 13:03:11.638086 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9qwvt"] Feb 19 13:03:11 crc kubenswrapper[4833]: I0219 13:03:11.640684 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9qwvt" Feb 19 13:03:11 crc kubenswrapper[4833]: I0219 13:03:11.642631 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 19 13:03:11 crc kubenswrapper[4833]: I0219 13:03:11.667038 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9qwvt"] Feb 19 13:03:11 crc kubenswrapper[4833]: I0219 13:03:11.683746 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60e9aea3-acf2-43d3-9f95-9c2b714daab3-config\") pod \"dnsmasq-dns-675f4bcbfc-g4tm6\" (UID: \"60e9aea3-acf2-43d3-9f95-9c2b714daab3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-g4tm6" Feb 19 13:03:11 crc kubenswrapper[4833]: I0219 13:03:11.683822 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cqsh\" (UniqueName: \"kubernetes.io/projected/60e9aea3-acf2-43d3-9f95-9c2b714daab3-kube-api-access-4cqsh\") pod \"dnsmasq-dns-675f4bcbfc-g4tm6\" (UID: \"60e9aea3-acf2-43d3-9f95-9c2b714daab3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-g4tm6" Feb 19 13:03:11 crc kubenswrapper[4833]: I0219 13:03:11.684984 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60e9aea3-acf2-43d3-9f95-9c2b714daab3-config\") pod \"dnsmasq-dns-675f4bcbfc-g4tm6\" (UID: \"60e9aea3-acf2-43d3-9f95-9c2b714daab3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-g4tm6" Feb 19 13:03:11 crc kubenswrapper[4833]: I0219 13:03:11.701751 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cqsh\" (UniqueName: \"kubernetes.io/projected/60e9aea3-acf2-43d3-9f95-9c2b714daab3-kube-api-access-4cqsh\") pod \"dnsmasq-dns-675f4bcbfc-g4tm6\" (UID: \"60e9aea3-acf2-43d3-9f95-9c2b714daab3\") " pod="openstack/dnsmasq-dns-675f4bcbfc-g4tm6" Feb 19 13:03:11 crc kubenswrapper[4833]: I0219 13:03:11.785524 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv4h7\" (UniqueName: \"kubernetes.io/projected/1f47b39f-04f6-47b1-9f13-a0ef00e2dad1-kube-api-access-mv4h7\") pod \"dnsmasq-dns-78dd6ddcc-9qwvt\" (UID: \"1f47b39f-04f6-47b1-9f13-a0ef00e2dad1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9qwvt" Feb 19 13:03:11 crc kubenswrapper[4833]: I0219 13:03:11.785623 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f47b39f-04f6-47b1-9f13-a0ef00e2dad1-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-9qwvt\" (UID: \"1f47b39f-04f6-47b1-9f13-a0ef00e2dad1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9qwvt" Feb 19 13:03:11 crc kubenswrapper[4833]: I0219 13:03:11.785668 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f47b39f-04f6-47b1-9f13-a0ef00e2dad1-config\") pod \"dnsmasq-dns-78dd6ddcc-9qwvt\" (UID: \"1f47b39f-04f6-47b1-9f13-a0ef00e2dad1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9qwvt" Feb 19 13:03:11 crc kubenswrapper[4833]: I0219 13:03:11.886934 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f47b39f-04f6-47b1-9f13-a0ef00e2dad1-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-9qwvt\" (UID: \"1f47b39f-04f6-47b1-9f13-a0ef00e2dad1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9qwvt" Feb 19 13:03:11 crc kubenswrapper[4833]: I0219 
13:03:11.887010 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f47b39f-04f6-47b1-9f13-a0ef00e2dad1-config\") pod \"dnsmasq-dns-78dd6ddcc-9qwvt\" (UID: \"1f47b39f-04f6-47b1-9f13-a0ef00e2dad1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9qwvt" Feb 19 13:03:11 crc kubenswrapper[4833]: I0219 13:03:11.887079 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv4h7\" (UniqueName: \"kubernetes.io/projected/1f47b39f-04f6-47b1-9f13-a0ef00e2dad1-kube-api-access-mv4h7\") pod \"dnsmasq-dns-78dd6ddcc-9qwvt\" (UID: \"1f47b39f-04f6-47b1-9f13-a0ef00e2dad1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9qwvt" Feb 19 13:03:11 crc kubenswrapper[4833]: I0219 13:03:11.888047 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f47b39f-04f6-47b1-9f13-a0ef00e2dad1-config\") pod \"dnsmasq-dns-78dd6ddcc-9qwvt\" (UID: \"1f47b39f-04f6-47b1-9f13-a0ef00e2dad1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9qwvt" Feb 19 13:03:11 crc kubenswrapper[4833]: I0219 13:03:11.888098 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f47b39f-04f6-47b1-9f13-a0ef00e2dad1-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-9qwvt\" (UID: \"1f47b39f-04f6-47b1-9f13-a0ef00e2dad1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9qwvt" Feb 19 13:03:11 crc kubenswrapper[4833]: I0219 13:03:11.888636 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-g4tm6" Feb 19 13:03:11 crc kubenswrapper[4833]: I0219 13:03:11.906898 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv4h7\" (UniqueName: \"kubernetes.io/projected/1f47b39f-04f6-47b1-9f13-a0ef00e2dad1-kube-api-access-mv4h7\") pod \"dnsmasq-dns-78dd6ddcc-9qwvt\" (UID: \"1f47b39f-04f6-47b1-9f13-a0ef00e2dad1\") " pod="openstack/dnsmasq-dns-78dd6ddcc-9qwvt" Feb 19 13:03:11 crc kubenswrapper[4833]: I0219 13:03:11.963188 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9qwvt" Feb 19 13:03:12 crc kubenswrapper[4833]: I0219 13:03:12.124088 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-g4tm6"] Feb 19 13:03:12 crc kubenswrapper[4833]: W0219 13:03:12.140765 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60e9aea3_acf2_43d3_9f95_9c2b714daab3.slice/crio-21f053589a2eb501aba07f311e2ccfbca2e5e7bbafb7739bef7c999df2ee55c2 WatchSource:0}: Error finding container 21f053589a2eb501aba07f311e2ccfbca2e5e7bbafb7739bef7c999df2ee55c2: Status 404 returned error can't find the container with id 21f053589a2eb501aba07f311e2ccfbca2e5e7bbafb7739bef7c999df2ee55c2 Feb 19 13:03:12 crc kubenswrapper[4833]: I0219 13:03:12.408357 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9qwvt"] Feb 19 13:03:12 crc kubenswrapper[4833]: W0219 13:03:12.409268 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f47b39f_04f6_47b1_9f13_a0ef00e2dad1.slice/crio-ca08854244cab3350ed0e9d0ddee6c615a1c2b6562534d8f05741299f7429a50 WatchSource:0}: Error finding container ca08854244cab3350ed0e9d0ddee6c615a1c2b6562534d8f05741299f7429a50: Status 404 returned error can't find the container with id ca08854244cab3350ed0e9d0ddee6c615a1c2b6562534d8f05741299f7429a50 Feb 19 13:03:13 crc kubenswrapper[4833]: I0219 13:03:13.061386 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jd8jh" event={"ID":"5239266e-48b6-46fa-bc3a-30c8d244db3b","Type":"ContainerStarted","Data":"97b6c6a87fabe42cbeb46f9c46b16de0f05d536f22273e12080a325f60939dae"} Feb 19 13:03:13 crc kubenswrapper[4833]: I0219 13:03:13.063160 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-g4tm6" event={"ID":"60e9aea3-acf2-43d3-9f95-9c2b714daab3","Type":"ContainerStarted","Data":"21f053589a2eb501aba07f311e2ccfbca2e5e7bbafb7739bef7c999df2ee55c2"} Feb 19 13:03:13 crc kubenswrapper[4833]: I0219 13:03:13.064891 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-9qwvt" event={"ID":"1f47b39f-04f6-47b1-9f13-a0ef00e2dad1","Type":"ContainerStarted","Data":"ca08854244cab3350ed0e9d0ddee6c615a1c2b6562534d8f05741299f7429a50"} Feb 19 13:03:13 crc kubenswrapper[4833]: I0219 13:03:13.099400 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jd8jh" podStartSLOduration=5.751542608 podStartE2EDuration="9.099378318s" podCreationTimestamp="2026-02-19 13:03:04 +0000 UTC" firstStartedPulling="2026-02-19 13:03:06.984682579 +0000 UTC m=+997.380201367" lastFinishedPulling="2026-02-19 13:03:10.332518269 +0000 UTC m=+1000.728037077" observedRunningTime="2026-02-19 13:03:13.088864969 +0000 UTC m=+1003.484383747" watchObservedRunningTime="2026-02-19 13:03:13.099378318 +0000 UTC m=+1003.494897086" Feb 19 13:03:14 crc kubenswrapper[4833]: I0219 13:03:14.209937 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-g4tm6"] Feb 19 13:03:14 crc kubenswrapper[4833]: I0219 13:03:14.227964 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-q9v7d"] Feb 19 13:03:14 crc kubenswrapper[4833]: I0219 13:03:14.229267 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-q9v7d" Feb 19 13:03:14 crc kubenswrapper[4833]: I0219 13:03:14.236115 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-q9v7d"] Feb 19 13:03:14 crc kubenswrapper[4833]: I0219 13:03:14.331546 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da63f2d3-04cc-49fd-800a-31ef3a9dfd7c-config\") pod \"dnsmasq-dns-666b6646f7-q9v7d\" (UID: \"da63f2d3-04cc-49fd-800a-31ef3a9dfd7c\") " pod="openstack/dnsmasq-dns-666b6646f7-q9v7d" Feb 19 13:03:14 crc kubenswrapper[4833]: I0219 13:03:14.331643 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da63f2d3-04cc-49fd-800a-31ef3a9dfd7c-dns-svc\") pod \"dnsmasq-dns-666b6646f7-q9v7d\" (UID: \"da63f2d3-04cc-49fd-800a-31ef3a9dfd7c\") " pod="openstack/dnsmasq-dns-666b6646f7-q9v7d" Feb 19 13:03:14 crc kubenswrapper[4833]: I0219 13:03:14.331749 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksfqf\" (UniqueName: \"kubernetes.io/projected/da63f2d3-04cc-49fd-800a-31ef3a9dfd7c-kube-api-access-ksfqf\") pod \"dnsmasq-dns-666b6646f7-q9v7d\" (UID: \"da63f2d3-04cc-49fd-800a-31ef3a9dfd7c\") " pod="openstack/dnsmasq-dns-666b6646f7-q9v7d" Feb 19 13:03:14 crc kubenswrapper[4833]: I0219 13:03:14.435199 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da63f2d3-04cc-49fd-800a-31ef3a9dfd7c-config\") pod \"dnsmasq-dns-666b6646f7-q9v7d\" (UID: \"da63f2d3-04cc-49fd-800a-31ef3a9dfd7c\") " pod="openstack/dnsmasq-dns-666b6646f7-q9v7d" Feb 19 13:03:14 crc kubenswrapper[4833]: I0219 13:03:14.435310 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da63f2d3-04cc-49fd-800a-31ef3a9dfd7c-dns-svc\") pod \"dnsmasq-dns-666b6646f7-q9v7d\" (UID: \"da63f2d3-04cc-49fd-800a-31ef3a9dfd7c\") " pod="openstack/dnsmasq-dns-666b6646f7-q9v7d" Feb 19 13:03:14 crc kubenswrapper[4833]: I0219 13:03:14.435343 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksfqf\" (UniqueName: \"kubernetes.io/projected/da63f2d3-04cc-49fd-800a-31ef3a9dfd7c-kube-api-access-ksfqf\") pod \"dnsmasq-dns-666b6646f7-q9v7d\" (UID: \"da63f2d3-04cc-49fd-800a-31ef3a9dfd7c\") " pod="openstack/dnsmasq-dns-666b6646f7-q9v7d" Feb 19 13:03:14 crc kubenswrapper[4833]: I0219 13:03:14.475233 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksfqf\" (UniqueName: \"kubernetes.io/projected/da63f2d3-04cc-49fd-800a-31ef3a9dfd7c-kube-api-access-ksfqf\") pod \"dnsmasq-dns-666b6646f7-q9v7d\" (UID: \"da63f2d3-04cc-49fd-800a-31ef3a9dfd7c\") " pod="openstack/dnsmasq-dns-666b6646f7-q9v7d" Feb 19 13:03:14 crc kubenswrapper[4833]: I0219 13:03:14.502241 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9qwvt"] Feb 19 13:03:14 crc kubenswrapper[4833]: I0219 13:03:14.525475 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zx2ph"] Feb 19 13:03:14 crc kubenswrapper[4833]: I0219 13:03:14.533687 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-zx2ph" Feb 19 13:03:14 crc kubenswrapper[4833]: I0219 13:03:14.546731 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zx2ph"] Feb 19 13:03:14 crc kubenswrapper[4833]: I0219 13:03:14.638385 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5m9j\" (UniqueName: \"kubernetes.io/projected/5042bc93-330a-4ddf-819f-d772da7b4360-kube-api-access-c5m9j\") pod \"dnsmasq-dns-57d769cc4f-zx2ph\" (UID: \"5042bc93-330a-4ddf-819f-d772da7b4360\") " pod="openstack/dnsmasq-dns-57d769cc4f-zx2ph" Feb 19 13:03:14 crc kubenswrapper[4833]: I0219 13:03:14.638441 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5042bc93-330a-4ddf-819f-d772da7b4360-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-zx2ph\" (UID: \"5042bc93-330a-4ddf-819f-d772da7b4360\") " pod="openstack/dnsmasq-dns-57d769cc4f-zx2ph" Feb 19 13:03:14 crc kubenswrapper[4833]: I0219 13:03:14.638459 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5042bc93-330a-4ddf-819f-d772da7b4360-config\") pod \"dnsmasq-dns-57d769cc4f-zx2ph\" (UID: \"5042bc93-330a-4ddf-819f-d772da7b4360\") " pod="openstack/dnsmasq-dns-57d769cc4f-zx2ph" Feb 19 13:03:14 crc kubenswrapper[4833]: I0219 13:03:14.703816 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da63f2d3-04cc-49fd-800a-31ef3a9dfd7c-config\") pod \"dnsmasq-dns-666b6646f7-q9v7d\" (UID: \"da63f2d3-04cc-49fd-800a-31ef3a9dfd7c\") " pod="openstack/dnsmasq-dns-666b6646f7-q9v7d" Feb 19 13:03:14 crc kubenswrapper[4833]: I0219 13:03:14.706098 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da63f2d3-04cc-49fd-800a-31ef3a9dfd7c-dns-svc\") pod \"dnsmasq-dns-666b6646f7-q9v7d\" (UID: \"da63f2d3-04cc-49fd-800a-31ef3a9dfd7c\") " pod="openstack/dnsmasq-dns-666b6646f7-q9v7d" Feb 19 13:03:14 crc kubenswrapper[4833]: I0219 13:03:14.739732 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5m9j\" (UniqueName: \"kubernetes.io/projected/5042bc93-330a-4ddf-819f-d772da7b4360-kube-api-access-c5m9j\") pod \"dnsmasq-dns-57d769cc4f-zx2ph\" (UID: \"5042bc93-330a-4ddf-819f-d772da7b4360\") " pod="openstack/dnsmasq-dns-57d769cc4f-zx2ph" Feb 19 13:03:14 crc kubenswrapper[4833]: I0219 13:03:14.739830 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5042bc93-330a-4ddf-819f-d772da7b4360-config\") pod \"dnsmasq-dns-57d769cc4f-zx2ph\" (UID: \"5042bc93-330a-4ddf-819f-d772da7b4360\") " pod="openstack/dnsmasq-dns-57d769cc4f-zx2ph" Feb 19 13:03:14 crc kubenswrapper[4833]: I0219 13:03:14.739853 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5042bc93-330a-4ddf-819f-d772da7b4360-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-zx2ph\" (UID: \"5042bc93-330a-4ddf-819f-d772da7b4360\") " pod="openstack/dnsmasq-dns-57d769cc4f-zx2ph" Feb 19 13:03:14 crc kubenswrapper[4833]: I0219 13:03:14.740887 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/5042bc93-330a-4ddf-819f-d772da7b4360-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-zx2ph\" (UID: \"5042bc93-330a-4ddf-819f-d772da7b4360\") " pod="openstack/dnsmasq-dns-57d769cc4f-zx2ph" Feb 19 13:03:14 crc kubenswrapper[4833]: I0219 13:03:14.740968 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5042bc93-330a-4ddf-819f-d772da7b4360-config\") pod \"dnsmasq-dns-57d769cc4f-zx2ph\" (UID: \"5042bc93-330a-4ddf-819f-d772da7b4360\") " pod="openstack/dnsmasq-dns-57d769cc4f-zx2ph" Feb 19 13:03:14 crc kubenswrapper[4833]: I0219 13:03:14.761087 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5m9j\" (UniqueName: \"kubernetes.io/projected/5042bc93-330a-4ddf-819f-d772da7b4360-kube-api-access-c5m9j\") pod \"dnsmasq-dns-57d769cc4f-zx2ph\" (UID: \"5042bc93-330a-4ddf-819f-d772da7b4360\") " pod="openstack/dnsmasq-dns-57d769cc4f-zx2ph" Feb 19 13:03:14 crc kubenswrapper[4833]: I0219 13:03:14.854095 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-q9v7d" Feb 19 13:03:14 crc kubenswrapper[4833]: I0219 13:03:14.883831 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-zx2ph" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.035655 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jd8jh" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.035738 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jd8jh" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.094758 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jd8jh" Feb 19 13:03:15 crc kubenswrapper[4833]: W0219 13:03:15.117248 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda63f2d3_04cc_49fd_800a_31ef3a9dfd7c.slice/crio-4547e6d7355c43014a7084db4e698938a754f669f1d7fce8b9534daedcf418a3 WatchSource:0}: Error finding container 4547e6d7355c43014a7084db4e698938a754f669f1d7fce8b9534daedcf418a3: Status 404 returned error can't find the container with id 4547e6d7355c43014a7084db4e698938a754f669f1d7fce8b9534daedcf418a3 Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.120644 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-q9v7d"] Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.399393 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zx2ph"] Feb 19 13:03:15 crc kubenswrapper[4833]: W0219 13:03:15.402477 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5042bc93_330a_4ddf_819f_d772da7b4360.slice/crio-05dfb8afe74fc0eeb151a16cc9655eab556c253eda1b0d053485a6d08a4d6168 WatchSource:0}: Error finding container 05dfb8afe74fc0eeb151a16cc9655eab556c253eda1b0d053485a6d08a4d6168: Status 404 returned error can't find the container with id 05dfb8afe74fc0eeb151a16cc9655eab556c253eda1b0d053485a6d08a4d6168 Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.735190 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.736779 4833 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.739887 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.740200 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.740434 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.740593 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.740826 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-jr9r6" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.741057 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.741449 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.744892 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.744949 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.744995 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.745690 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"44979c86cbc1a1a08268bf3eace13600a4809b3fa1a8321a545736d1f5619e6f"} pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.745757 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" containerID="cri-o://44979c86cbc1a1a08268bf3eace13600a4809b3fa1a8321a545736d1f5619e6f" gracePeriod=600 Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.748400 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.751748 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.754352 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.754626 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.754806 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.754906 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.755005 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-8ddnc" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.755126 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.756318 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.760328 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.778578 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.857842 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.857895 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.857917 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9c07579b-ab54-4267-83d6-1d6c0404ba3e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " pod="openstack/rabbitmq-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.857953 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.857972 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c07579b-ab54-4267-83d6-1d6c0404ba3e-config-data\") pod \"rabbitmq-server-0\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " pod="openstack/rabbitmq-server-0" Feb 19 13:03:15 crc 
kubenswrapper[4833]: I0219 13:03:15.857989 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.858003 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.858021 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smgt2\" (UniqueName: \"kubernetes.io/projected/9c07579b-ab54-4267-83d6-1d6c0404ba3e-kube-api-access-smgt2\") pod \"rabbitmq-server-0\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " pod="openstack/rabbitmq-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.858050 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9c07579b-ab54-4267-83d6-1d6c0404ba3e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " pod="openstack/rabbitmq-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.858070 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9c07579b-ab54-4267-83d6-1d6c0404ba3e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " pod="openstack/rabbitmq-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.858086 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.858103 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9c07579b-ab54-4267-83d6-1d6c0404ba3e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " pod="openstack/rabbitmq-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.858124 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " pod="openstack/rabbitmq-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.858138 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9c07579b-ab54-4267-83d6-1d6c0404ba3e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " pod="openstack/rabbitmq-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.858156 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.858183 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.858199 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.858241 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gql7f\" (UniqueName: \"kubernetes.io/projected/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-kube-api-access-gql7f\") pod \"rabbitmq-cell1-server-0\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.858263 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.858284 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9c07579b-ab54-4267-83d6-1d6c0404ba3e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " pod="openstack/rabbitmq-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.858323 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9c07579b-ab54-4267-83d6-1d6c0404ba3e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " pod="openstack/rabbitmq-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.858340 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9c07579b-ab54-4267-83d6-1d6c0404ba3e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " pod="openstack/rabbitmq-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.959292 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gql7f\" (UniqueName: \"kubernetes.io/projected/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-kube-api-access-gql7f\") pod \"rabbitmq-cell1-server-0\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.959629 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.959674 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9c07579b-ab54-4267-83d6-1d6c0404ba3e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " pod="openstack/rabbitmq-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.959707 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9c07579b-ab54-4267-83d6-1d6c0404ba3e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " pod="openstack/rabbitmq-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.959733 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9c07579b-ab54-4267-83d6-1d6c0404ba3e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " pod="openstack/rabbitmq-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.959756 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.959790 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.959812 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9c07579b-ab54-4267-83d6-1d6c0404ba3e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " pod="openstack/rabbitmq-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.959837 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.959861 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c07579b-ab54-4267-83d6-1d6c0404ba3e-config-data\") pod \"rabbitmq-server-0\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " pod="openstack/rabbitmq-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.959887 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-server-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.959907 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.959932 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smgt2\" (UniqueName: \"kubernetes.io/projected/9c07579b-ab54-4267-83d6-1d6c0404ba3e-kube-api-access-smgt2\") pod \"rabbitmq-server-0\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " pod="openstack/rabbitmq-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.959956 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9c07579b-ab54-4267-83d6-1d6c0404ba3e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " pod="openstack/rabbitmq-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.959985 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9c07579b-ab54-4267-83d6-1d6c0404ba3e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " pod="openstack/rabbitmq-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.960010 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.960035 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9c07579b-ab54-4267-83d6-1d6c0404ba3e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " pod="openstack/rabbitmq-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.960063 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " pod="openstack/rabbitmq-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.960082 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9c07579b-ab54-4267-83d6-1d6c0404ba3e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " pod="openstack/rabbitmq-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.960103 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.960137 4833 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.960161 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.961565 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9c07579b-ab54-4267-83d6-1d6c0404ba3e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " pod="openstack/rabbitmq-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.961972 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9c07579b-ab54-4267-83d6-1d6c0404ba3e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " pod="openstack/rabbitmq-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.962429 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.962709 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9c07579b-ab54-4267-83d6-1d6c0404ba3e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " pod="openstack/rabbitmq-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.962966 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.963217 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c07579b-ab54-4267-83d6-1d6c0404ba3e-config-data\") pod \"rabbitmq-server-0\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " pod="openstack/rabbitmq-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.963521 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.964230 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.965830 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.966055 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.966434 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.967210 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.967624 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.968335 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.970610 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9c07579b-ab54-4267-83d6-1d6c0404ba3e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " pod="openstack/rabbitmq-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.970659 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9c07579b-ab54-4267-83d6-1d6c0404ba3e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " pod="openstack/rabbitmq-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.971400 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9c07579b-ab54-4267-83d6-1d6c0404ba3e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " pod="openstack/rabbitmq-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.974630 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/9c07579b-ab54-4267-83d6-1d6c0404ba3e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " pod="openstack/rabbitmq-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.977081 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.979441 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9c07579b-ab54-4267-83d6-1d6c0404ba3e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " pod="openstack/rabbitmq-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.991831 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gql7f\" (UniqueName: \"kubernetes.io/projected/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-kube-api-access-gql7f\") pod \"rabbitmq-cell1-server-0\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.994739 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smgt2\" (UniqueName: \"kubernetes.io/projected/9c07579b-ab54-4267-83d6-1d6c0404ba3e-kube-api-access-smgt2\") pod \"rabbitmq-server-0\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " pod="openstack/rabbitmq-server-0" Feb 19 13:03:15 crc kubenswrapper[4833]: I0219 13:03:15.998754 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " pod="openstack/rabbitmq-server-0" Feb 19 13:03:16 crc kubenswrapper[4833]: I0219 13:03:16.002064 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:16 crc kubenswrapper[4833]: I0219 13:03:16.078665 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:03:16 crc kubenswrapper[4833]: I0219 13:03:16.087151 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 13:03:16 crc kubenswrapper[4833]: I0219 13:03:16.092703 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-zx2ph" event={"ID":"5042bc93-330a-4ddf-819f-d772da7b4360","Type":"ContainerStarted","Data":"05dfb8afe74fc0eeb151a16cc9655eab556c253eda1b0d053485a6d08a4d6168"} Feb 19 13:03:16 crc kubenswrapper[4833]: I0219 13:03:16.094331 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-q9v7d" event={"ID":"da63f2d3-04cc-49fd-800a-31ef3a9dfd7c","Type":"ContainerStarted","Data":"4547e6d7355c43014a7084db4e698938a754f669f1d7fce8b9534daedcf418a3"} Feb 19 13:03:16 crc kubenswrapper[4833]: I0219 13:03:16.393134 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 13:03:16 crc kubenswrapper[4833]: W0219 13:03:16.399873 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c07579b_ab54_4267_83d6_1d6c0404ba3e.slice/crio-32acf339673f51b3a8356c341f3e232bd60b365d32763c70a5cad033d302ba5c WatchSource:0}: Error finding container 32acf339673f51b3a8356c341f3e232bd60b365d32763c70a5cad033d302ba5c: Status 404 returned error can't find the container with id 32acf339673f51b3a8356c341f3e232bd60b365d32763c70a5cad033d302ba5c Feb 19 13:03:16 crc kubenswrapper[4833]: I0219 13:03:16.592410 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 13:03:16 crc kubenswrapper[4833]: W0219 13:03:16.597948 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda356e13b_39de_4d0b_aa58_f2dc6d3179fb.slice/crio-0d9f54f3a20af8540dc70cec52456b2b60de31fa259ce62e928771149500f80a WatchSource:0}: Error finding container 0d9f54f3a20af8540dc70cec52456b2b60de31fa259ce62e928771149500f80a: Status 404 returned error can't find the container with id 0d9f54f3a20af8540dc70cec52456b2b60de31fa259ce62e928771149500f80a Feb 19 13:03:16 crc kubenswrapper[4833]: I0219 13:03:16.874639 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 19 13:03:16 crc kubenswrapper[4833]: I0219 13:03:16.876463 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 19 13:03:16 crc kubenswrapper[4833]: I0219 13:03:16.881519 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 19 13:03:16 crc kubenswrapper[4833]: I0219 13:03:16.884868 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-9lcr4" Feb 19 13:03:16 crc kubenswrapper[4833]: I0219 13:03:16.885247 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 19 13:03:16 crc kubenswrapper[4833]: I0219 13:03:16.885443 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 19 13:03:16 crc kubenswrapper[4833]: I0219 13:03:16.897431 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 19 13:03:16 crc kubenswrapper[4833]: I0219 13:03:16.903746 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 13:03:17 crc kubenswrapper[4833]: I0219 13:03:17.076116 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/679ec18d-1d70-4cc5-8103-b28f0809a45e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"679ec18d-1d70-4cc5-8103-b28f0809a45e\") " pod="openstack/openstack-galera-0" Feb 19 13:03:17 crc kubenswrapper[4833]: I0219 13:03:17.076161 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/679ec18d-1d70-4cc5-8103-b28f0809a45e-config-data-default\") pod \"openstack-galera-0\" (UID: \"679ec18d-1d70-4cc5-8103-b28f0809a45e\") " pod="openstack/openstack-galera-0" Feb 19 13:03:17 crc kubenswrapper[4833]: I0219 13:03:17.076207 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/679ec18d-1d70-4cc5-8103-b28f0809a45e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"679ec18d-1d70-4cc5-8103-b28f0809a45e\") " pod="openstack/openstack-galera-0" Feb 19 13:03:17 crc kubenswrapper[4833]: I0219 13:03:17.076232 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"679ec18d-1d70-4cc5-8103-b28f0809a45e\") " pod="openstack/openstack-galera-0" Feb 19 13:03:17 crc kubenswrapper[4833]: I0219 13:03:17.076255 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679ec18d-1d70-4cc5-8103-b28f0809a45e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"679ec18d-1d70-4cc5-8103-b28f0809a45e\") " pod="openstack/openstack-galera-0" Feb 19 13:03:17 crc kubenswrapper[4833]: I0219 13:03:17.076530 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/679ec18d-1d70-4cc5-8103-b28f0809a45e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"679ec18d-1d70-4cc5-8103-b28f0809a45e\") " pod="openstack/openstack-galera-0" Feb 19 13:03:17 crc kubenswrapper[4833]: I0219 13:03:17.076767 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/679ec18d-1d70-4cc5-8103-b28f0809a45e-kolla-config\") pod \"openstack-galera-0\" (UID: \"679ec18d-1d70-4cc5-8103-b28f0809a45e\") " pod="openstack/openstack-galera-0" Feb 19 13:03:17 crc kubenswrapper[4833]: I0219 13:03:17.076818 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qldb6\" (UniqueName: \"kubernetes.io/projected/679ec18d-1d70-4cc5-8103-b28f0809a45e-kube-api-access-qldb6\") pod \"openstack-galera-0\" (UID: \"679ec18d-1d70-4cc5-8103-b28f0809a45e\") " pod="openstack/openstack-galera-0" Feb 19 13:03:17 crc kubenswrapper[4833]: I0219 13:03:17.127725 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a356e13b-39de-4d0b-aa58-f2dc6d3179fb","Type":"ContainerStarted","Data":"0d9f54f3a20af8540dc70cec52456b2b60de31fa259ce62e928771149500f80a"} Feb 19 13:03:17 crc kubenswrapper[4833]: I0219 13:03:17.158371 4833 generic.go:334] "Generic (PLEG): container finished" podID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerID="44979c86cbc1a1a08268bf3eace13600a4809b3fa1a8321a545736d1f5619e6f" exitCode=0 Feb 19 13:03:17 crc kubenswrapper[4833]: I0219 13:03:17.158508 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" event={"ID":"a396d626-cea2-42cf-84c5-943b0b85a92b","Type":"ContainerDied","Data":"44979c86cbc1a1a08268bf3eace13600a4809b3fa1a8321a545736d1f5619e6f"} Feb 19 13:03:17 crc kubenswrapper[4833]: I0219 13:03:17.158542 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" event={"ID":"a396d626-cea2-42cf-84c5-943b0b85a92b","Type":"ContainerStarted","Data":"79901fa015c98a89f8eb5d748d58a779eb4aed74d086040cca560575f94233a9"} Feb 19 13:03:17 crc kubenswrapper[4833]: I0219 13:03:17.158560 4833 scope.go:117] "RemoveContainer" containerID="cd9eac9e9427e5822654e34b25e68666ba752339a3fe6cb1abe9c3e947b8e9ba" Feb 19 13:03:17 crc kubenswrapper[4833]: I0219 13:03:17.168763 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9c07579b-ab54-4267-83d6-1d6c0404ba3e","Type":"ContainerStarted","Data":"32acf339673f51b3a8356c341f3e232bd60b365d32763c70a5cad033d302ba5c"} Feb 19 13:03:17 crc kubenswrapper[4833]: I0219 13:03:17.177686 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/679ec18d-1d70-4cc5-8103-b28f0809a45e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"679ec18d-1d70-4cc5-8103-b28f0809a45e\") " pod="openstack/openstack-galera-0" Feb 19 13:03:17 crc kubenswrapper[4833]: I0219 13:03:17.177751 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"679ec18d-1d70-4cc5-8103-b28f0809a45e\") " pod="openstack/openstack-galera-0" Feb 19 13:03:17 crc kubenswrapper[4833]: I0219 13:03:17.177790 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679ec18d-1d70-4cc5-8103-b28f0809a45e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"679ec18d-1d70-4cc5-8103-b28f0809a45e\") " pod="openstack/openstack-galera-0" Feb 19 13:03:17 crc kubenswrapper[4833]: I0219 13:03:17.177850 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/679ec18d-1d70-4cc5-8103-b28f0809a45e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"679ec18d-1d70-4cc5-8103-b28f0809a45e\") " pod="openstack/openstack-galera-0" Feb 19 13:03:17 crc kubenswrapper[4833]: I0219 13:03:17.177903 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/679ec18d-1d70-4cc5-8103-b28f0809a45e-kolla-config\") pod \"openstack-galera-0\" (UID: \"679ec18d-1d70-4cc5-8103-b28f0809a45e\") " pod="openstack/openstack-galera-0" Feb 19 13:03:17 crc kubenswrapper[4833]: I0219 13:03:17.177929 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qldb6\" (UniqueName: \"kubernetes.io/projected/679ec18d-1d70-4cc5-8103-b28f0809a45e-kube-api-access-qldb6\") pod \"openstack-galera-0\" (UID: \"679ec18d-1d70-4cc5-8103-b28f0809a45e\") " pod="openstack/openstack-galera-0" Feb 19 13:03:17 crc kubenswrapper[4833]: I0219 13:03:17.177997 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/679ec18d-1d70-4cc5-8103-b28f0809a45e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"679ec18d-1d70-4cc5-8103-b28f0809a45e\") " pod="openstack/openstack-galera-0" Feb 19 13:03:17 crc kubenswrapper[4833]: I0219 13:03:17.178017 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/679ec18d-1d70-4cc5-8103-b28f0809a45e-config-data-default\") pod \"openstack-galera-0\" (UID: \"679ec18d-1d70-4cc5-8103-b28f0809a45e\") " pod="openstack/openstack-galera-0" Feb 19 13:03:17 crc kubenswrapper[4833]: I0219 13:03:17.178919 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/679ec18d-1d70-4cc5-8103-b28f0809a45e-config-data-default\") pod \"openstack-galera-0\" (UID: \"679ec18d-1d70-4cc5-8103-b28f0809a45e\") " pod="openstack/openstack-galera-0" Feb 19 13:03:17 crc kubenswrapper[4833]: I0219 13:03:17.179418 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"679ec18d-1d70-4cc5-8103-b28f0809a45e\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-galera-0" Feb 19 13:03:17 crc kubenswrapper[4833]: I0219 13:03:17.181043 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/679ec18d-1d70-4cc5-8103-b28f0809a45e-kolla-config\") pod \"openstack-galera-0\" (UID: \"679ec18d-1d70-4cc5-8103-b28f0809a45e\") " pod="openstack/openstack-galera-0" Feb 19 13:03:17 crc kubenswrapper[4833]: I0219 13:03:17.181965 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/679ec18d-1d70-4cc5-8103-b28f0809a45e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"679ec18d-1d70-4cc5-8103-b28f0809a45e\") " pod="openstack/openstack-galera-0" Feb 19 13:03:17 crc kubenswrapper[4833]: I0219 13:03:17.191342 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/679ec18d-1d70-4cc5-8103-b28f0809a45e-operator-scripts\") pod \"openstack-galera-0\" 
(UID: \"679ec18d-1d70-4cc5-8103-b28f0809a45e\") " pod="openstack/openstack-galera-0" Feb 19 13:03:17 crc kubenswrapper[4833]: I0219 13:03:17.230844 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/679ec18d-1d70-4cc5-8103-b28f0809a45e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"679ec18d-1d70-4cc5-8103-b28f0809a45e\") " pod="openstack/openstack-galera-0" Feb 19 13:03:17 crc kubenswrapper[4833]: I0219 13:03:17.233173 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/679ec18d-1d70-4cc5-8103-b28f0809a45e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"679ec18d-1d70-4cc5-8103-b28f0809a45e\") " pod="openstack/openstack-galera-0" Feb 19 13:03:17 crc kubenswrapper[4833]: I0219 13:03:17.287392 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qldb6\" (UniqueName: \"kubernetes.io/projected/679ec18d-1d70-4cc5-8103-b28f0809a45e-kube-api-access-qldb6\") pod \"openstack-galera-0\" (UID: \"679ec18d-1d70-4cc5-8103-b28f0809a45e\") " pod="openstack/openstack-galera-0" Feb 19 13:03:17 crc kubenswrapper[4833]: I0219 13:03:17.346686 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"679ec18d-1d70-4cc5-8103-b28f0809a45e\") " pod="openstack/openstack-galera-0" Feb 19 13:03:17 crc kubenswrapper[4833]: I0219 13:03:17.352605 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jd8jh" Feb 19 13:03:17 crc kubenswrapper[4833]: I0219 13:03:17.429380 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jd8jh"] Feb 19 13:03:17 crc kubenswrapper[4833]: I0219 13:03:17.497642 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 19 13:03:17 crc kubenswrapper[4833]: I0219 13:03:17.981599 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.191259 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"679ec18d-1d70-4cc5-8103-b28f0809a45e","Type":"ContainerStarted","Data":"a73dad18f16a10d8fd97500efee2ae30d4045358f5a20aab06237bacba383cb7"} Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.342488 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.345455 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.352446 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.352681 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.352823 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.356470 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-vtpc7" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.365489 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.407164 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.409842 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.412138 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-kctml" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.412405 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.412651 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.415097 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.447170 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/866102e5-b1c2-4f33-9c34-312be44faea7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"866102e5-b1c2-4f33-9c34-312be44faea7\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.447214 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/866102e5-b1c2-4f33-9c34-312be44faea7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"866102e5-b1c2-4f33-9c34-312be44faea7\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.447241 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/866102e5-b1c2-4f33-9c34-312be44faea7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"866102e5-b1c2-4f33-9c34-312be44faea7\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.447258 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/866102e5-b1c2-4f33-9c34-312be44faea7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"866102e5-b1c2-4f33-9c34-312be44faea7\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:03:18 crc kubenswrapper[4833]: 
I0219 13:03:18.447307 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/866102e5-b1c2-4f33-9c34-312be44faea7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"866102e5-b1c2-4f33-9c34-312be44faea7\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.447330 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwxp8\" (UniqueName: \"kubernetes.io/projected/866102e5-b1c2-4f33-9c34-312be44faea7-kube-api-access-xwxp8\") pod \"openstack-cell1-galera-0\" (UID: \"866102e5-b1c2-4f33-9c34-312be44faea7\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.447350 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/866102e5-b1c2-4f33-9c34-312be44faea7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"866102e5-b1c2-4f33-9c34-312be44faea7\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.447382 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"866102e5-b1c2-4f33-9c34-312be44faea7\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.548788 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/866102e5-b1c2-4f33-9c34-312be44faea7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"866102e5-b1c2-4f33-9c34-312be44faea7\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.548839 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c04a00b-8613-472f-bf1b-e1d26ed34312-combined-ca-bundle\") pod \"memcached-0\" (UID: \"8c04a00b-8613-472f-bf1b-e1d26ed34312\") " pod="openstack/memcached-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.548865 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"866102e5-b1c2-4f33-9c34-312be44faea7\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.548898 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8c04a00b-8613-472f-bf1b-e1d26ed34312-kolla-config\") pod \"memcached-0\" (UID: \"8c04a00b-8613-472f-bf1b-e1d26ed34312\") " pod="openstack/memcached-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.548917 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/866102e5-b1c2-4f33-9c34-312be44faea7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"866102e5-b1c2-4f33-9c34-312be44faea7\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.548943 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6q5r\" (UniqueName: \"kubernetes.io/projected/8c04a00b-8613-472f-bf1b-e1d26ed34312-kube-api-access-t6q5r\") pod \"memcached-0\" (UID: \"8c04a00b-8613-472f-bf1b-e1d26ed34312\") " pod="openstack/memcached-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.548961 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/866102e5-b1c2-4f33-9c34-312be44faea7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"866102e5-b1c2-4f33-9c34-312be44faea7\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.548978 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c04a00b-8613-472f-bf1b-e1d26ed34312-config-data\") pod \"memcached-0\" (UID: \"8c04a00b-8613-472f-bf1b-e1d26ed34312\") " pod="openstack/memcached-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.549006 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/866102e5-b1c2-4f33-9c34-312be44faea7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"866102e5-b1c2-4f33-9c34-312be44faea7\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.549023 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/866102e5-b1c2-4f33-9c34-312be44faea7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"866102e5-b1c2-4f33-9c34-312be44faea7\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.549062 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c04a00b-8613-472f-bf1b-e1d26ed34312-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8c04a00b-8613-472f-bf1b-e1d26ed34312\") " pod="openstack/memcached-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.549087 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/866102e5-b1c2-4f33-9c34-312be44faea7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"866102e5-b1c2-4f33-9c34-312be44faea7\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.549108 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwxp8\" (UniqueName: \"kubernetes.io/projected/866102e5-b1c2-4f33-9c34-312be44faea7-kube-api-access-xwxp8\") pod \"openstack-cell1-galera-0\" (UID: \"866102e5-b1c2-4f33-9c34-312be44faea7\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.549779 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/866102e5-b1c2-4f33-9c34-312be44faea7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"866102e5-b1c2-4f33-9c34-312be44faea7\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.550019 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"866102e5-b1c2-4f33-9c34-312be44faea7\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-cell1-galera-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.555066 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/866102e5-b1c2-4f33-9c34-312be44faea7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"866102e5-b1c2-4f33-9c34-312be44faea7\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.555767 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/866102e5-b1c2-4f33-9c34-312be44faea7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"866102e5-b1c2-4f33-9c34-312be44faea7\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.557697 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/866102e5-b1c2-4f33-9c34-312be44faea7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"866102e5-b1c2-4f33-9c34-312be44faea7\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.564201 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/866102e5-b1c2-4f33-9c34-312be44faea7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"866102e5-b1c2-4f33-9c34-312be44faea7\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.566506 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/866102e5-b1c2-4f33-9c34-312be44faea7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"866102e5-b1c2-4f33-9c34-312be44faea7\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.566641 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwxp8\" (UniqueName: \"kubernetes.io/projected/866102e5-b1c2-4f33-9c34-312be44faea7-kube-api-access-xwxp8\") pod \"openstack-cell1-galera-0\" (UID: \"866102e5-b1c2-4f33-9c34-312be44faea7\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.577739 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"866102e5-b1c2-4f33-9c34-312be44faea7\") " pod="openstack/openstack-cell1-galera-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.650330 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8c04a00b-8613-472f-bf1b-e1d26ed34312-kolla-config\") pod \"memcached-0\" (UID: \"8c04a00b-8613-472f-bf1b-e1d26ed34312\") " pod="openstack/memcached-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.650421 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6q5r\" (UniqueName: \"kubernetes.io/projected/8c04a00b-8613-472f-bf1b-e1d26ed34312-kube-api-access-t6q5r\") pod \"memcached-0\" (UID: \"8c04a00b-8613-472f-bf1b-e1d26ed34312\") " pod="openstack/memcached-0" 
Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.650454 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c04a00b-8613-472f-bf1b-e1d26ed34312-config-data\") pod \"memcached-0\" (UID: \"8c04a00b-8613-472f-bf1b-e1d26ed34312\") " pod="openstack/memcached-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.650514 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c04a00b-8613-472f-bf1b-e1d26ed34312-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8c04a00b-8613-472f-bf1b-e1d26ed34312\") " pod="openstack/memcached-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.650567 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c04a00b-8613-472f-bf1b-e1d26ed34312-combined-ca-bundle\") pod \"memcached-0\" (UID: \"8c04a00b-8613-472f-bf1b-e1d26ed34312\") " pod="openstack/memcached-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.654347 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c04a00b-8613-472f-bf1b-e1d26ed34312-config-data\") pod \"memcached-0\" (UID: \"8c04a00b-8613-472f-bf1b-e1d26ed34312\") " pod="openstack/memcached-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.655058 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c04a00b-8613-472f-bf1b-e1d26ed34312-combined-ca-bundle\") pod \"memcached-0\" (UID: \"8c04a00b-8613-472f-bf1b-e1d26ed34312\") " pod="openstack/memcached-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.653676 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8c04a00b-8613-472f-bf1b-e1d26ed34312-kolla-config\") pod \"memcached-0\" (UID: \"8c04a00b-8613-472f-bf1b-e1d26ed34312\") " pod="openstack/memcached-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.657097 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c04a00b-8613-472f-bf1b-e1d26ed34312-memcached-tls-certs\") pod \"memcached-0\" (UID: \"8c04a00b-8613-472f-bf1b-e1d26ed34312\") " pod="openstack/memcached-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.670163 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6q5r\" (UniqueName: \"kubernetes.io/projected/8c04a00b-8613-472f-bf1b-e1d26ed34312-kube-api-access-t6q5r\") pod \"memcached-0\" (UID: \"8c04a00b-8613-472f-bf1b-e1d26ed34312\") " pod="openstack/memcached-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.683506 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 19 13:03:18 crc kubenswrapper[4833]: I0219 13:03:18.745128 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 19 13:03:19 crc kubenswrapper[4833]: I0219 13:03:19.070721 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 19 13:03:19 crc kubenswrapper[4833]: I0219 13:03:19.197839 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"8c04a00b-8613-472f-bf1b-e1d26ed34312","Type":"ContainerStarted","Data":"9262d06955c28832096413a604e14f22766a945700c08ef5a62879b890484dc9"} Feb 19 13:03:19 crc kubenswrapper[4833]: I0219 13:03:19.198073 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jd8jh" podUID="5239266e-48b6-46fa-bc3a-30c8d244db3b" containerName="registry-server" containerID="cri-o://97b6c6a87fabe42cbeb46f9c46b16de0f05d536f22273e12080a325f60939dae" gracePeriod=2 Feb 19 13:03:19 crc kubenswrapper[4833]: I0219 13:03:19.329375 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 19 13:03:19 crc kubenswrapper[4833]: W0219 13:03:19.368354 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod866102e5_b1c2_4f33_9c34_312be44faea7.slice/crio-6aaad0fdf79c2e1057450308b4e739a7682347c1429d8a2d9d62c83501853686 WatchSource:0}: Error finding container 6aaad0fdf79c2e1057450308b4e739a7682347c1429d8a2d9d62c83501853686: Status 404 returned error can't find the container with id 6aaad0fdf79c2e1057450308b4e739a7682347c1429d8a2d9d62c83501853686 Feb 19 13:03:19 crc kubenswrapper[4833]: I0219 13:03:19.892589 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jd8jh" Feb 19 13:03:19 crc kubenswrapper[4833]: I0219 13:03:19.974464 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5239266e-48b6-46fa-bc3a-30c8d244db3b-catalog-content\") pod \"5239266e-48b6-46fa-bc3a-30c8d244db3b\" (UID: \"5239266e-48b6-46fa-bc3a-30c8d244db3b\") " Feb 19 13:03:19 crc kubenswrapper[4833]: I0219 13:03:19.974560 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5239266e-48b6-46fa-bc3a-30c8d244db3b-utilities\") pod \"5239266e-48b6-46fa-bc3a-30c8d244db3b\" (UID: \"5239266e-48b6-46fa-bc3a-30c8d244db3b\") " Feb 19 13:03:19 crc kubenswrapper[4833]: I0219 13:03:19.976350 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5239266e-48b6-46fa-bc3a-30c8d244db3b-utilities" (OuterVolumeSpecName: "utilities") pod "5239266e-48b6-46fa-bc3a-30c8d244db3b" (UID: "5239266e-48b6-46fa-bc3a-30c8d244db3b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:03:19 crc kubenswrapper[4833]: I0219 13:03:19.976590 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n28hn\" (UniqueName: \"kubernetes.io/projected/5239266e-48b6-46fa-bc3a-30c8d244db3b-kube-api-access-n28hn\") pod \"5239266e-48b6-46fa-bc3a-30c8d244db3b\" (UID: \"5239266e-48b6-46fa-bc3a-30c8d244db3b\") " Feb 19 13:03:19 crc kubenswrapper[4833]: I0219 13:03:19.979571 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5239266e-48b6-46fa-bc3a-30c8d244db3b-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:03:20 crc kubenswrapper[4833]: I0219 13:03:19.999196 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5239266e-48b6-46fa-bc3a-30c8d244db3b-kube-api-access-n28hn" (OuterVolumeSpecName: "kube-api-access-n28hn") pod "5239266e-48b6-46fa-bc3a-30c8d244db3b" (UID: "5239266e-48b6-46fa-bc3a-30c8d244db3b"). InnerVolumeSpecName "kube-api-access-n28hn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:03:20 crc kubenswrapper[4833]: I0219 13:03:20.081715 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n28hn\" (UniqueName: \"kubernetes.io/projected/5239266e-48b6-46fa-bc3a-30c8d244db3b-kube-api-access-n28hn\") on node \"crc\" DevicePath \"\"" Feb 19 13:03:20 crc kubenswrapper[4833]: I0219 13:03:20.098202 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5239266e-48b6-46fa-bc3a-30c8d244db3b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5239266e-48b6-46fa-bc3a-30c8d244db3b" (UID: "5239266e-48b6-46fa-bc3a-30c8d244db3b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:03:20 crc kubenswrapper[4833]: I0219 13:03:20.185398 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5239266e-48b6-46fa-bc3a-30c8d244db3b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:03:20 crc kubenswrapper[4833]: I0219 13:03:20.228463 4833 generic.go:334] "Generic (PLEG): container finished" podID="5239266e-48b6-46fa-bc3a-30c8d244db3b" containerID="97b6c6a87fabe42cbeb46f9c46b16de0f05d536f22273e12080a325f60939dae" exitCode=0 Feb 19 13:03:20 crc kubenswrapper[4833]: I0219 13:03:20.228557 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jd8jh" Feb 19 13:03:20 crc kubenswrapper[4833]: I0219 13:03:20.228546 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jd8jh" event={"ID":"5239266e-48b6-46fa-bc3a-30c8d244db3b","Type":"ContainerDied","Data":"97b6c6a87fabe42cbeb46f9c46b16de0f05d536f22273e12080a325f60939dae"} Feb 19 13:03:20 crc kubenswrapper[4833]: I0219 13:03:20.228663 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jd8jh" event={"ID":"5239266e-48b6-46fa-bc3a-30c8d244db3b","Type":"ContainerDied","Data":"c8d276c14d1e24934db2bb7b3d14c326f88dcbed476bdce13f496c4827548770"} Feb 19 13:03:20 crc kubenswrapper[4833]: I0219 13:03:20.228688 4833 scope.go:117] "RemoveContainer" containerID="97b6c6a87fabe42cbeb46f9c46b16de0f05d536f22273e12080a325f60939dae" Feb 19 13:03:20 crc kubenswrapper[4833]: I0219 13:03:20.235673 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"866102e5-b1c2-4f33-9c34-312be44faea7","Type":"ContainerStarted","Data":"6aaad0fdf79c2e1057450308b4e739a7682347c1429d8a2d9d62c83501853686"} Feb 19 13:03:20 crc kubenswrapper[4833]: I0219 13:03:20.270579 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jd8jh"] Feb 19 13:03:20 crc kubenswrapper[4833]: I0219 13:03:20.279872 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jd8jh"] Feb 19 13:03:20 crc kubenswrapper[4833]: I0219 13:03:20.293358 4833 scope.go:117] "RemoveContainer" containerID="2ffeb07328f0fcb9fce559128b3a9205bf89853d27acb75c0154bc1081b869b6" Feb 19 13:03:20 crc kubenswrapper[4833]: I0219 13:03:20.325482 4833 scope.go:117] "RemoveContainer" containerID="20cc72891c21af42728020addc6806654334b074da14eb4b77c8032a84909375" Feb 19 13:03:20 crc kubenswrapper[4833]: I0219 13:03:20.344634 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5239266e-48b6-46fa-bc3a-30c8d244db3b" path="/var/lib/kubelet/pods/5239266e-48b6-46fa-bc3a-30c8d244db3b/volumes" Feb 19 13:03:20 crc kubenswrapper[4833]: I0219 13:03:20.379399 4833 scope.go:117] "RemoveContainer" containerID="97b6c6a87fabe42cbeb46f9c46b16de0f05d536f22273e12080a325f60939dae" Feb 19 13:03:20 crc kubenswrapper[4833]: E0219 13:03:20.380086 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97b6c6a87fabe42cbeb46f9c46b16de0f05d536f22273e12080a325f60939dae\": container with ID starting with 97b6c6a87fabe42cbeb46f9c46b16de0f05d536f22273e12080a325f60939dae not found: ID does not exist" containerID="97b6c6a87fabe42cbeb46f9c46b16de0f05d536f22273e12080a325f60939dae" Feb 19 13:03:20 crc kubenswrapper[4833]: I0219 13:03:20.380141 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97b6c6a87fabe42cbeb46f9c46b16de0f05d536f22273e12080a325f60939dae"} err="failed to get container status \"97b6c6a87fabe42cbeb46f9c46b16de0f05d536f22273e12080a325f60939dae\": rpc error: code = NotFound desc = could not find container \"97b6c6a87fabe42cbeb46f9c46b16de0f05d536f22273e12080a325f60939dae\": container with ID starting with 97b6c6a87fabe42cbeb46f9c46b16de0f05d536f22273e12080a325f60939dae not found: ID does not exist" Feb 19 13:03:20 crc kubenswrapper[4833]: I0219 13:03:20.380168 4833 scope.go:117] "RemoveContainer" 
containerID="2ffeb07328f0fcb9fce559128b3a9205bf89853d27acb75c0154bc1081b869b6" Feb 19 13:03:20 crc kubenswrapper[4833]: E0219 13:03:20.380638 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ffeb07328f0fcb9fce559128b3a9205bf89853d27acb75c0154bc1081b869b6\": container with ID starting with 2ffeb07328f0fcb9fce559128b3a9205bf89853d27acb75c0154bc1081b869b6 not found: ID does not exist" containerID="2ffeb07328f0fcb9fce559128b3a9205bf89853d27acb75c0154bc1081b869b6" Feb 19 13:03:20 crc kubenswrapper[4833]: I0219 13:03:20.380670 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ffeb07328f0fcb9fce559128b3a9205bf89853d27acb75c0154bc1081b869b6"} err="failed to get container status \"2ffeb07328f0fcb9fce559128b3a9205bf89853d27acb75c0154bc1081b869b6\": rpc error: code = NotFound desc = could not find container \"2ffeb07328f0fcb9fce559128b3a9205bf89853d27acb75c0154bc1081b869b6\": container with ID starting with 2ffeb07328f0fcb9fce559128b3a9205bf89853d27acb75c0154bc1081b869b6 not found: ID does not exist" Feb 19 13:03:20 crc kubenswrapper[4833]: I0219 13:03:20.380711 4833 scope.go:117] "RemoveContainer" containerID="20cc72891c21af42728020addc6806654334b074da14eb4b77c8032a84909375" Feb 19 13:03:20 crc kubenswrapper[4833]: E0219 13:03:20.381361 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20cc72891c21af42728020addc6806654334b074da14eb4b77c8032a84909375\": container with ID starting with 20cc72891c21af42728020addc6806654334b074da14eb4b77c8032a84909375 not found: ID does not exist" containerID="20cc72891c21af42728020addc6806654334b074da14eb4b77c8032a84909375" Feb 19 13:03:20 crc kubenswrapper[4833]: I0219 13:03:20.381398 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20cc72891c21af42728020addc6806654334b074da14eb4b77c8032a84909375"} err="failed to get container status \"20cc72891c21af42728020addc6806654334b074da14eb4b77c8032a84909375\": rpc error: code = NotFound desc = could not find container \"20cc72891c21af42728020addc6806654334b074da14eb4b77c8032a84909375\": container with ID starting with 20cc72891c21af42728020addc6806654334b074da14eb4b77c8032a84909375 not found: ID does not exist" Feb 19 13:03:20 crc kubenswrapper[4833]: I0219 13:03:20.722896 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 13:03:20 crc kubenswrapper[4833]: E0219 13:03:20.723185 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5239266e-48b6-46fa-bc3a-30c8d244db3b" containerName="extract-utilities" Feb 19 13:03:20 crc kubenswrapper[4833]: I0219 13:03:20.723201 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5239266e-48b6-46fa-bc3a-30c8d244db3b" containerName="extract-utilities" Feb 19 13:03:20 crc kubenswrapper[4833]: E0219 13:03:20.723214 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5239266e-48b6-46fa-bc3a-30c8d244db3b" containerName="extract-content" Feb 19 13:03:20 crc kubenswrapper[4833]: I0219 13:03:20.723221 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5239266e-48b6-46fa-bc3a-30c8d244db3b" containerName="extract-content" Feb 19 13:03:20 crc kubenswrapper[4833]: E0219 13:03:20.723240 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5239266e-48b6-46fa-bc3a-30c8d244db3b" containerName="registry-server" Feb 19 13:03:20 crc 
kubenswrapper[4833]: I0219 13:03:20.723246 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5239266e-48b6-46fa-bc3a-30c8d244db3b" containerName="registry-server" Feb 19 13:03:20 crc kubenswrapper[4833]: I0219 13:03:20.723384 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="5239266e-48b6-46fa-bc3a-30c8d244db3b" containerName="registry-server" Feb 19 13:03:20 crc kubenswrapper[4833]: I0219 13:03:20.723962 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 13:03:20 crc kubenswrapper[4833]: I0219 13:03:20.728195 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-ffvlq" Feb 19 13:03:20 crc kubenswrapper[4833]: I0219 13:03:20.733997 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 13:03:20 crc kubenswrapper[4833]: I0219 13:03:20.800592 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwzpm\" (UniqueName: \"kubernetes.io/projected/4635da17-a051-4ef3-a8e3-f0dc7996cf17-kube-api-access-hwzpm\") pod \"kube-state-metrics-0\" (UID: \"4635da17-a051-4ef3-a8e3-f0dc7996cf17\") " pod="openstack/kube-state-metrics-0" Feb 19 13:03:20 crc kubenswrapper[4833]: I0219 13:03:20.902615 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwzpm\" (UniqueName: \"kubernetes.io/projected/4635da17-a051-4ef3-a8e3-f0dc7996cf17-kube-api-access-hwzpm\") pod \"kube-state-metrics-0\" (UID: \"4635da17-a051-4ef3-a8e3-f0dc7996cf17\") " pod="openstack/kube-state-metrics-0" Feb 19 13:03:20 crc kubenswrapper[4833]: I0219 13:03:20.936270 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwzpm\" (UniqueName: \"kubernetes.io/projected/4635da17-a051-4ef3-a8e3-f0dc7996cf17-kube-api-access-hwzpm\") pod \"kube-state-metrics-0\" (UID: \"4635da17-a051-4ef3-a8e3-f0dc7996cf17\") " pod="openstack/kube-state-metrics-0" Feb 19 13:03:21 crc kubenswrapper[4833]: I0219 13:03:21.053437 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 13:03:22 crc kubenswrapper[4833]: I0219 13:03:22.553915 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.000153 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-fbrgv"] Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.001331 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fbrgv" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.003252 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.004094 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.004269 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-lkbj2" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.006737 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-9jlg7"] Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.008267 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-9jlg7" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.016718 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fbrgv"] Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.030780 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-9jlg7"] Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.056307 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/488bba31-e718-4ef1-bd04-6ed3fe165c89-ovn-controller-tls-certs\") pod \"ovn-controller-fbrgv\" (UID: \"488bba31-e718-4ef1-bd04-6ed3fe165c89\") " pod="openstack/ovn-controller-fbrgv" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.056413 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/488bba31-e718-4ef1-bd04-6ed3fe165c89-var-run\") pod \"ovn-controller-fbrgv\" (UID: \"488bba31-e718-4ef1-bd04-6ed3fe165c89\") " pod="openstack/ovn-controller-fbrgv" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.056458 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7p8q\" (UniqueName: \"kubernetes.io/projected/488bba31-e718-4ef1-bd04-6ed3fe165c89-kube-api-access-g7p8q\") pod \"ovn-controller-fbrgv\" (UID: \"488bba31-e718-4ef1-bd04-6ed3fe165c89\") " pod="openstack/ovn-controller-fbrgv" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.056623 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/488bba31-e718-4ef1-bd04-6ed3fe165c89-var-run-ovn\") pod \"ovn-controller-fbrgv\" (UID: \"488bba31-e718-4ef1-bd04-6ed3fe165c89\") " pod="openstack/ovn-controller-fbrgv" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.056685 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/488bba31-e718-4ef1-bd04-6ed3fe165c89-var-log-ovn\") pod \"ovn-controller-fbrgv\" (UID: \"488bba31-e718-4ef1-bd04-6ed3fe165c89\") " pod="openstack/ovn-controller-fbrgv" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.056746 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/afdbcf60-89d8-426e-9323-1347d5cb238f-scripts\") pod \"ovn-controller-ovs-9jlg7\" (UID: \"afdbcf60-89d8-426e-9323-1347d5cb238f\") " pod="openstack/ovn-controller-ovs-9jlg7" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.056762 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/488bba31-e718-4ef1-bd04-6ed3fe165c89-scripts\") pod \"ovn-controller-fbrgv\" (UID: \"488bba31-e718-4ef1-bd04-6ed3fe165c89\") " pod="openstack/ovn-controller-fbrgv" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.056818 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/488bba31-e718-4ef1-bd04-6ed3fe165c89-combined-ca-bundle\") pod \"ovn-controller-fbrgv\" (UID: \"488bba31-e718-4ef1-bd04-6ed3fe165c89\") " pod="openstack/ovn-controller-fbrgv" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 
13:03:24.056835 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/afdbcf60-89d8-426e-9323-1347d5cb238f-var-lib\") pod \"ovn-controller-ovs-9jlg7\" (UID: \"afdbcf60-89d8-426e-9323-1347d5cb238f\") " pod="openstack/ovn-controller-ovs-9jlg7" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.056872 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/afdbcf60-89d8-426e-9323-1347d5cb238f-etc-ovs\") pod \"ovn-controller-ovs-9jlg7\" (UID: \"afdbcf60-89d8-426e-9323-1347d5cb238f\") " pod="openstack/ovn-controller-ovs-9jlg7" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.056889 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/afdbcf60-89d8-426e-9323-1347d5cb238f-var-log\") pod \"ovn-controller-ovs-9jlg7\" (UID: \"afdbcf60-89d8-426e-9323-1347d5cb238f\") " pod="openstack/ovn-controller-ovs-9jlg7" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.056912 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zzpt\" (UniqueName: \"kubernetes.io/projected/afdbcf60-89d8-426e-9323-1347d5cb238f-kube-api-access-9zzpt\") pod \"ovn-controller-ovs-9jlg7\" (UID: \"afdbcf60-89d8-426e-9323-1347d5cb238f\") " pod="openstack/ovn-controller-ovs-9jlg7" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.056987 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/afdbcf60-89d8-426e-9323-1347d5cb238f-var-run\") pod \"ovn-controller-ovs-9jlg7\" (UID: \"afdbcf60-89d8-426e-9323-1347d5cb238f\") " pod="openstack/ovn-controller-ovs-9jlg7" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.158691 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/afdbcf60-89d8-426e-9323-1347d5cb238f-scripts\") pod \"ovn-controller-ovs-9jlg7\" (UID: \"afdbcf60-89d8-426e-9323-1347d5cb238f\") " pod="openstack/ovn-controller-ovs-9jlg7" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.158752 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/488bba31-e718-4ef1-bd04-6ed3fe165c89-scripts\") pod \"ovn-controller-fbrgv\" (UID: \"488bba31-e718-4ef1-bd04-6ed3fe165c89\") " pod="openstack/ovn-controller-fbrgv" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.158789 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/488bba31-e718-4ef1-bd04-6ed3fe165c89-combined-ca-bundle\") pod \"ovn-controller-fbrgv\" (UID: \"488bba31-e718-4ef1-bd04-6ed3fe165c89\") " pod="openstack/ovn-controller-fbrgv" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.158839 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/afdbcf60-89d8-426e-9323-1347d5cb238f-etc-ovs\") pod \"ovn-controller-ovs-9jlg7\" (UID: \"afdbcf60-89d8-426e-9323-1347d5cb238f\") " pod="openstack/ovn-controller-ovs-9jlg7" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.158903 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/afdbcf60-89d8-426e-9323-1347d5cb238f-var-lib\") pod \"ovn-controller-ovs-9jlg7\" (UID: \"afdbcf60-89d8-426e-9323-1347d5cb238f\") " pod="openstack/ovn-controller-ovs-9jlg7" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.158920 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/afdbcf60-89d8-426e-9323-1347d5cb238f-var-log\") pod \"ovn-controller-ovs-9jlg7\" (UID: \"afdbcf60-89d8-426e-9323-1347d5cb238f\") " pod="openstack/ovn-controller-ovs-9jlg7" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.158956 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zzpt\" (UniqueName: \"kubernetes.io/projected/afdbcf60-89d8-426e-9323-1347d5cb238f-kube-api-access-9zzpt\") pod \"ovn-controller-ovs-9jlg7\" (UID: \"afdbcf60-89d8-426e-9323-1347d5cb238f\") " pod="openstack/ovn-controller-ovs-9jlg7" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.159035 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/afdbcf60-89d8-426e-9323-1347d5cb238f-var-run\") pod \"ovn-controller-ovs-9jlg7\" (UID: \"afdbcf60-89d8-426e-9323-1347d5cb238f\") " pod="openstack/ovn-controller-ovs-9jlg7" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.159209 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/488bba31-e718-4ef1-bd04-6ed3fe165c89-ovn-controller-tls-certs\") pod \"ovn-controller-fbrgv\" (UID: \"488bba31-e718-4ef1-bd04-6ed3fe165c89\") " pod="openstack/ovn-controller-fbrgv" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.159241 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/488bba31-e718-4ef1-bd04-6ed3fe165c89-var-run\") pod \"ovn-controller-fbrgv\" (UID: \"488bba31-e718-4ef1-bd04-6ed3fe165c89\") " pod="openstack/ovn-controller-fbrgv" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.159262 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7p8q\" (UniqueName: \"kubernetes.io/projected/488bba31-e718-4ef1-bd04-6ed3fe165c89-kube-api-access-g7p8q\") pod \"ovn-controller-fbrgv\" (UID: \"488bba31-e718-4ef1-bd04-6ed3fe165c89\") " pod="openstack/ovn-controller-fbrgv" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.159313 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/488bba31-e718-4ef1-bd04-6ed3fe165c89-var-run-ovn\") pod \"ovn-controller-fbrgv\" (UID: \"488bba31-e718-4ef1-bd04-6ed3fe165c89\") " pod="openstack/ovn-controller-fbrgv" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.159436 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/488bba31-e718-4ef1-bd04-6ed3fe165c89-var-log-ovn\") pod \"ovn-controller-fbrgv\" (UID: \"488bba31-e718-4ef1-bd04-6ed3fe165c89\") " pod="openstack/ovn-controller-fbrgv" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.159660 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/afdbcf60-89d8-426e-9323-1347d5cb238f-var-lib\") pod \"ovn-controller-ovs-9jlg7\" (UID: \"afdbcf60-89d8-426e-9323-1347d5cb238f\") " 
pod="openstack/ovn-controller-ovs-9jlg7" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.159710 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/488bba31-e718-4ef1-bd04-6ed3fe165c89-var-run-ovn\") pod \"ovn-controller-fbrgv\" (UID: \"488bba31-e718-4ef1-bd04-6ed3fe165c89\") " pod="openstack/ovn-controller-fbrgv" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.159766 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/488bba31-e718-4ef1-bd04-6ed3fe165c89-var-run\") pod \"ovn-controller-fbrgv\" (UID: \"488bba31-e718-4ef1-bd04-6ed3fe165c89\") " pod="openstack/ovn-controller-fbrgv" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.159834 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/afdbcf60-89d8-426e-9323-1347d5cb238f-etc-ovs\") pod \"ovn-controller-ovs-9jlg7\" (UID: \"afdbcf60-89d8-426e-9323-1347d5cb238f\") " pod="openstack/ovn-controller-ovs-9jlg7" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.159868 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/afdbcf60-89d8-426e-9323-1347d5cb238f-var-log\") pod \"ovn-controller-ovs-9jlg7\" (UID: \"afdbcf60-89d8-426e-9323-1347d5cb238f\") " pod="openstack/ovn-controller-ovs-9jlg7" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.159937 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/afdbcf60-89d8-426e-9323-1347d5cb238f-var-run\") pod \"ovn-controller-ovs-9jlg7\" (UID: \"afdbcf60-89d8-426e-9323-1347d5cb238f\") " pod="openstack/ovn-controller-ovs-9jlg7" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.160031 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/488bba31-e718-4ef1-bd04-6ed3fe165c89-var-log-ovn\") pod \"ovn-controller-fbrgv\" (UID: \"488bba31-e718-4ef1-bd04-6ed3fe165c89\") " pod="openstack/ovn-controller-fbrgv" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.165800 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/afdbcf60-89d8-426e-9323-1347d5cb238f-scripts\") pod \"ovn-controller-ovs-9jlg7\" (UID: \"afdbcf60-89d8-426e-9323-1347d5cb238f\") " pod="openstack/ovn-controller-ovs-9jlg7" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.166302 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/488bba31-e718-4ef1-bd04-6ed3fe165c89-combined-ca-bundle\") pod \"ovn-controller-fbrgv\" (UID: \"488bba31-e718-4ef1-bd04-6ed3fe165c89\") " pod="openstack/ovn-controller-fbrgv" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.166623 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/488bba31-e718-4ef1-bd04-6ed3fe165c89-scripts\") pod \"ovn-controller-fbrgv\" (UID: \"488bba31-e718-4ef1-bd04-6ed3fe165c89\") " pod="openstack/ovn-controller-fbrgv" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.170570 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/488bba31-e718-4ef1-bd04-6ed3fe165c89-ovn-controller-tls-certs\") pod 
\"ovn-controller-fbrgv\" (UID: \"488bba31-e718-4ef1-bd04-6ed3fe165c89\") " pod="openstack/ovn-controller-fbrgv" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.178222 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7p8q\" (UniqueName: \"kubernetes.io/projected/488bba31-e718-4ef1-bd04-6ed3fe165c89-kube-api-access-g7p8q\") pod \"ovn-controller-fbrgv\" (UID: \"488bba31-e718-4ef1-bd04-6ed3fe165c89\") " pod="openstack/ovn-controller-fbrgv" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.189702 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zzpt\" (UniqueName: \"kubernetes.io/projected/afdbcf60-89d8-426e-9323-1347d5cb238f-kube-api-access-9zzpt\") pod \"ovn-controller-ovs-9jlg7\" (UID: \"afdbcf60-89d8-426e-9323-1347d5cb238f\") " pod="openstack/ovn-controller-ovs-9jlg7" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.337176 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fbrgv" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.349034 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-9jlg7" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.918783 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.919947 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.922603 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.922819 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.922944 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.924079 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-jfwxm" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.924205 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.929326 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.976791 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/55636a14-4194-419e-be9c-d4f8c4064d77-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"55636a14-4194-419e-be9c-d4f8c4064d77\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.976971 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55636a14-4194-419e-be9c-d4f8c4064d77-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"55636a14-4194-419e-be9c-d4f8c4064d77\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.977007 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/55636a14-4194-419e-be9c-d4f8c4064d77-config\") pod \"ovsdbserver-nb-0\" (UID: \"55636a14-4194-419e-be9c-d4f8c4064d77\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.977146 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"55636a14-4194-419e-be9c-d4f8c4064d77\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.977188 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7tw5\" (UniqueName: \"kubernetes.io/projected/55636a14-4194-419e-be9c-d4f8c4064d77-kube-api-access-n7tw5\") pod \"ovsdbserver-nb-0\" (UID: \"55636a14-4194-419e-be9c-d4f8c4064d77\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.977226 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/55636a14-4194-419e-be9c-d4f8c4064d77-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"55636a14-4194-419e-be9c-d4f8c4064d77\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.977264 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55636a14-4194-419e-be9c-d4f8c4064d77-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"55636a14-4194-419e-be9c-d4f8c4064d77\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:03:24 crc kubenswrapper[4833]: I0219 13:03:24.977287 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/55636a14-4194-419e-be9c-d4f8c4064d77-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"55636a14-4194-419e-be9c-d4f8c4064d77\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:03:25 crc kubenswrapper[4833]: I0219 13:03:25.078979 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/55636a14-4194-419e-be9c-d4f8c4064d77-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"55636a14-4194-419e-be9c-d4f8c4064d77\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:03:25 crc kubenswrapper[4833]: I0219 13:03:25.079047 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55636a14-4194-419e-be9c-d4f8c4064d77-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"55636a14-4194-419e-be9c-d4f8c4064d77\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:03:25 crc kubenswrapper[4833]: I0219 13:03:25.079096 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55636a14-4194-419e-be9c-d4f8c4064d77-config\") pod \"ovsdbserver-nb-0\" (UID: \"55636a14-4194-419e-be9c-d4f8c4064d77\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:03:25 crc kubenswrapper[4833]: I0219 13:03:25.079146 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"55636a14-4194-419e-be9c-d4f8c4064d77\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:03:25 crc 
kubenswrapper[4833]: I0219 13:03:25.079165 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7tw5\" (UniqueName: \"kubernetes.io/projected/55636a14-4194-419e-be9c-d4f8c4064d77-kube-api-access-n7tw5\") pod \"ovsdbserver-nb-0\" (UID: \"55636a14-4194-419e-be9c-d4f8c4064d77\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:03:25 crc kubenswrapper[4833]: I0219 13:03:25.079189 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/55636a14-4194-419e-be9c-d4f8c4064d77-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"55636a14-4194-419e-be9c-d4f8c4064d77\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:03:25 crc kubenswrapper[4833]: I0219 13:03:25.079208 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55636a14-4194-419e-be9c-d4f8c4064d77-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"55636a14-4194-419e-be9c-d4f8c4064d77\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:03:25 crc kubenswrapper[4833]: I0219 13:03:25.079273 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/55636a14-4194-419e-be9c-d4f8c4064d77-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"55636a14-4194-419e-be9c-d4f8c4064d77\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:03:25 crc kubenswrapper[4833]: I0219 13:03:25.080270 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/55636a14-4194-419e-be9c-d4f8c4064d77-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"55636a14-4194-419e-be9c-d4f8c4064d77\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:03:25 crc kubenswrapper[4833]: I0219 13:03:25.080389 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55636a14-4194-419e-be9c-d4f8c4064d77-config\") pod \"ovsdbserver-nb-0\" (UID: \"55636a14-4194-419e-be9c-d4f8c4064d77\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:03:25 crc kubenswrapper[4833]: I0219 13:03:25.080654 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"55636a14-4194-419e-be9c-d4f8c4064d77\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-nb-0" Feb 19 13:03:25 crc kubenswrapper[4833]: I0219 13:03:25.081139 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55636a14-4194-419e-be9c-d4f8c4064d77-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"55636a14-4194-419e-be9c-d4f8c4064d77\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:03:25 crc kubenswrapper[4833]: I0219 13:03:25.084584 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55636a14-4194-419e-be9c-d4f8c4064d77-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"55636a14-4194-419e-be9c-d4f8c4064d77\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:03:25 crc kubenswrapper[4833]: I0219 13:03:25.090781 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/55636a14-4194-419e-be9c-d4f8c4064d77-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"55636a14-4194-419e-be9c-d4f8c4064d77\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:03:25 crc kubenswrapper[4833]: I0219 13:03:25.091809 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/55636a14-4194-419e-be9c-d4f8c4064d77-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"55636a14-4194-419e-be9c-d4f8c4064d77\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:03:25 crc kubenswrapper[4833]: I0219 13:03:25.096008 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7tw5\" (UniqueName: \"kubernetes.io/projected/55636a14-4194-419e-be9c-d4f8c4064d77-kube-api-access-n7tw5\") pod \"ovsdbserver-nb-0\" (UID: \"55636a14-4194-419e-be9c-d4f8c4064d77\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:03:25 crc kubenswrapper[4833]: I0219 13:03:25.104580 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"55636a14-4194-419e-be9c-d4f8c4064d77\") " pod="openstack/ovsdbserver-nb-0" Feb 19 13:03:25 crc kubenswrapper[4833]: I0219 13:03:25.244836 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 19 13:03:26 crc kubenswrapper[4833]: I0219 13:03:26.294554 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4635da17-a051-4ef3-a8e3-f0dc7996cf17","Type":"ContainerStarted","Data":"6c2a150dbbce8d3db59a2f6157b2a397e68f0d2239701d268a463d1e71749218"} Feb 19 13:03:27 crc kubenswrapper[4833]: I0219 13:03:27.397588 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 13:03:27 crc kubenswrapper[4833]: I0219 13:03:27.399890 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 13:03:27 crc kubenswrapper[4833]: I0219 13:03:27.402256 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-zkfbg" Feb 19 13:03:27 crc kubenswrapper[4833]: I0219 13:03:27.402589 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 19 13:03:27 crc kubenswrapper[4833]: I0219 13:03:27.402904 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 19 13:03:27 crc kubenswrapper[4833]: I0219 13:03:27.407707 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 13:03:27 crc kubenswrapper[4833]: I0219 13:03:27.409126 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 19 13:03:27 crc kubenswrapper[4833]: I0219 13:03:27.517206 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1c5ea761-d056-4868-af9f-309486208889\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:03:27 crc kubenswrapper[4833]: I0219 13:03:27.517252 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c5ea761-d056-4868-af9f-309486208889-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1c5ea761-d056-4868-af9f-309486208889\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:03:27 crc kubenswrapper[4833]: I0219 13:03:27.517284 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c5ea761-d056-4868-af9f-309486208889-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1c5ea761-d056-4868-af9f-309486208889\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:03:27 crc kubenswrapper[4833]: I0219 13:03:27.517571 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1c5ea761-d056-4868-af9f-309486208889-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1c5ea761-d056-4868-af9f-309486208889\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:03:27 crc kubenswrapper[4833]: I0219 13:03:27.517652 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28g2l\" (UniqueName: \"kubernetes.io/projected/1c5ea761-d056-4868-af9f-309486208889-kube-api-access-28g2l\") pod \"ovsdbserver-sb-0\" (UID: \"1c5ea761-d056-4868-af9f-309486208889\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:03:27 crc kubenswrapper[4833]: I0219 13:03:27.517789 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c5ea761-d056-4868-af9f-309486208889-config\") pod \"ovsdbserver-sb-0\" (UID: \"1c5ea761-d056-4868-af9f-309486208889\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:03:27 crc kubenswrapper[4833]: I0219 13:03:27.517858 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c5ea761-d056-4868-af9f-309486208889-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1c5ea761-d056-4868-af9f-309486208889\") " 
pod="openstack/ovsdbserver-sb-0" Feb 19 13:03:27 crc kubenswrapper[4833]: I0219 13:03:27.517894 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c5ea761-d056-4868-af9f-309486208889-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1c5ea761-d056-4868-af9f-309486208889\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:03:27 crc kubenswrapper[4833]: I0219 13:03:27.619408 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c5ea761-d056-4868-af9f-309486208889-config\") pod \"ovsdbserver-sb-0\" (UID: \"1c5ea761-d056-4868-af9f-309486208889\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:03:27 crc kubenswrapper[4833]: I0219 13:03:27.619595 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c5ea761-d056-4868-af9f-309486208889-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1c5ea761-d056-4868-af9f-309486208889\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:03:27 crc kubenswrapper[4833]: I0219 13:03:27.619636 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c5ea761-d056-4868-af9f-309486208889-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1c5ea761-d056-4868-af9f-309486208889\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:03:27 crc kubenswrapper[4833]: I0219 13:03:27.619693 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1c5ea761-d056-4868-af9f-309486208889\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:03:27 crc kubenswrapper[4833]: I0219 13:03:27.619723 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c5ea761-d056-4868-af9f-309486208889-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1c5ea761-d056-4868-af9f-309486208889\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:03:27 crc kubenswrapper[4833]: I0219 13:03:27.619761 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c5ea761-d056-4868-af9f-309486208889-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1c5ea761-d056-4868-af9f-309486208889\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:03:27 crc kubenswrapper[4833]: I0219 13:03:27.619907 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1c5ea761-d056-4868-af9f-309486208889-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1c5ea761-d056-4868-af9f-309486208889\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:03:27 crc kubenswrapper[4833]: I0219 13:03:27.619939 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28g2l\" (UniqueName: \"kubernetes.io/projected/1c5ea761-d056-4868-af9f-309486208889-kube-api-access-28g2l\") pod \"ovsdbserver-sb-0\" (UID: \"1c5ea761-d056-4868-af9f-309486208889\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:03:27 crc kubenswrapper[4833]: I0219 13:03:27.620119 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1c5ea761-d056-4868-af9f-309486208889\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-sb-0" Feb 19 13:03:27 crc kubenswrapper[4833]: I0219 13:03:27.620910 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1c5ea761-d056-4868-af9f-309486208889-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1c5ea761-d056-4868-af9f-309486208889\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:03:27 crc kubenswrapper[4833]: I0219 13:03:27.621312 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c5ea761-d056-4868-af9f-309486208889-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1c5ea761-d056-4868-af9f-309486208889\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:03:27 crc kubenswrapper[4833]: I0219 13:03:27.621560 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c5ea761-d056-4868-af9f-309486208889-config\") pod \"ovsdbserver-sb-0\" (UID: \"1c5ea761-d056-4868-af9f-309486208889\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:03:27 crc kubenswrapper[4833]: I0219 13:03:27.627237 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c5ea761-d056-4868-af9f-309486208889-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1c5ea761-d056-4868-af9f-309486208889\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:03:27 crc kubenswrapper[4833]: I0219 13:03:27.628858 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c5ea761-d056-4868-af9f-309486208889-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1c5ea761-d056-4868-af9f-309486208889\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:03:27 crc kubenswrapper[4833]: I0219 13:03:27.629103 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1c5ea761-d056-4868-af9f-309486208889-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1c5ea761-d056-4868-af9f-309486208889\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:03:27 crc kubenswrapper[4833]: I0219 13:03:27.641958 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1c5ea761-d056-4868-af9f-309486208889\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:03:27 crc kubenswrapper[4833]: I0219 13:03:27.645600 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28g2l\" (UniqueName: \"kubernetes.io/projected/1c5ea761-d056-4868-af9f-309486208889-kube-api-access-28g2l\") pod \"ovsdbserver-sb-0\" (UID: \"1c5ea761-d056-4868-af9f-309486208889\") " pod="openstack/ovsdbserver-sb-0" Feb 19 13:03:27 crc kubenswrapper[4833]: I0219 13:03:27.720806 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 19 13:03:39 crc kubenswrapper[4833]: E0219 13:03:39.311879 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 19 13:03:39 crc kubenswrapper[4833]: E0219 13:03:39.312603 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gql7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(a356e13b-39de-4d0b-aa58-f2dc6d3179fb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 13:03:39 crc kubenswrapper[4833]: E0219 13:03:39.313914 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/rabbitmq-cell1-server-0" podUID="a356e13b-39de-4d0b-aa58-f2dc6d3179fb" Feb 19 13:03:39 crc kubenswrapper[4833]: E0219 13:03:39.407102 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="a356e13b-39de-4d0b-aa58-f2dc6d3179fb" Feb 19 13:03:40 crc kubenswrapper[4833]: E0219 13:03:40.874127 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 19 13:03:40 crc kubenswrapper[4833]: E0219 13:03:40.874740 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mv4h7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-9qwvt_openstack(1f47b39f-04f6-47b1-9f13-a0ef00e2dad1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 13:03:40 crc kubenswrapper[4833]: E0219 13:03:40.876752 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-9qwvt" podUID="1f47b39f-04f6-47b1-9f13-a0ef00e2dad1" Feb 19 13:03:40 crc 
kubenswrapper[4833]: E0219 13:03:40.878440 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 19 13:03:40 crc kubenswrapper[4833]: E0219 13:03:40.879098 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ksfqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-q9v7d_openstack(da63f2d3-04cc-49fd-800a-31ef3a9dfd7c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 13:03:40 crc kubenswrapper[4833]: E0219 13:03:40.880707 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-q9v7d" podUID="da63f2d3-04cc-49fd-800a-31ef3a9dfd7c" Feb 19 13:03:40 crc kubenswrapper[4833]: E0219 13:03:40.900848 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 19 13:03:40 crc kubenswrapper[4833]: E0219 13:03:40.901000 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c5m9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-zx2ph_openstack(5042bc93-330a-4ddf-819f-d772da7b4360): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 13:03:40 crc kubenswrapper[4833]: E0219 13:03:40.902281 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-zx2ph" podUID="5042bc93-330a-4ddf-819f-d772da7b4360" Feb 19 13:03:41 crc kubenswrapper[4833]: E0219 13:03:41.019110 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 19 13:03:41 crc kubenswrapper[4833]: E0219 13:03:41.019263 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4cqsh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-g4tm6_openstack(60e9aea3-acf2-43d3-9f95-9c2b714daab3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 13:03:41 crc kubenswrapper[4833]: E0219 13:03:41.021429 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-g4tm6" podUID="60e9aea3-acf2-43d3-9f95-9c2b714daab3" Feb 19 13:03:41 crc kubenswrapper[4833]: I0219 13:03:41.379580 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-9jlg7"] Feb 19 13:03:41 crc kubenswrapper[4833]: I0219 13:03:41.388261 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fbrgv"] Feb 19 13:03:41 crc kubenswrapper[4833]: I0219 13:03:41.434472 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fbrgv" event={"ID":"488bba31-e718-4ef1-bd04-6ed3fe165c89","Type":"ContainerStarted","Data":"f16e52362e8f9490249c0704c2ec9b1b2068367b1eb240de62a91af6892ad220"} Feb 19 13:03:41 crc kubenswrapper[4833]: I0219 13:03:41.436440 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9jlg7" event={"ID":"afdbcf60-89d8-426e-9323-1347d5cb238f","Type":"ContainerStarted","Data":"ee87d9366ef67043d47975a227038e55f4cc373d62c1bbf487e3d4709a2106b5"} Feb 19 13:03:41 crc kubenswrapper[4833]: E0219 13:03:41.440271 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-zx2ph" podUID="5042bc93-330a-4ddf-819f-d772da7b4360" Feb 19 13:03:41 crc kubenswrapper[4833]: E0219 13:03:41.440294 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-q9v7d" podUID="da63f2d3-04cc-49fd-800a-31ef3a9dfd7c" Feb 19 13:03:41 crc kubenswrapper[4833]: I0219 13:03:41.498981 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 19 13:03:41 crc kubenswrapper[4833]: I0219 13:03:41.913699 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 19 13:03:41 crc kubenswrapper[4833]: I0219 13:03:41.915790 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9qwvt" Feb 19 13:03:41 crc kubenswrapper[4833]: I0219 13:03:41.922885 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-g4tm6" Feb 19 13:03:42 crc kubenswrapper[4833]: I0219 13:03:42.072653 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f47b39f-04f6-47b1-9f13-a0ef00e2dad1-dns-svc\") pod \"1f47b39f-04f6-47b1-9f13-a0ef00e2dad1\" (UID: \"1f47b39f-04f6-47b1-9f13-a0ef00e2dad1\") " Feb 19 13:03:42 crc kubenswrapper[4833]: I0219 13:03:42.073087 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f47b39f-04f6-47b1-9f13-a0ef00e2dad1-config\") pod \"1f47b39f-04f6-47b1-9f13-a0ef00e2dad1\" (UID: \"1f47b39f-04f6-47b1-9f13-a0ef00e2dad1\") " Feb 19 13:03:42 crc kubenswrapper[4833]: I0219 13:03:42.073151 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv4h7\" (UniqueName: \"kubernetes.io/projected/1f47b39f-04f6-47b1-9f13-a0ef00e2dad1-kube-api-access-mv4h7\") pod \"1f47b39f-04f6-47b1-9f13-a0ef00e2dad1\" (UID: \"1f47b39f-04f6-47b1-9f13-a0ef00e2dad1\") " Feb 19 13:03:42 crc kubenswrapper[4833]: I0219 13:03:42.073229 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f47b39f-04f6-47b1-9f13-a0ef00e2dad1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1f47b39f-04f6-47b1-9f13-a0ef00e2dad1" (UID: "1f47b39f-04f6-47b1-9f13-a0ef00e2dad1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:03:42 crc kubenswrapper[4833]: I0219 13:03:42.073286 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60e9aea3-acf2-43d3-9f95-9c2b714daab3-config\") pod \"60e9aea3-acf2-43d3-9f95-9c2b714daab3\" (UID: \"60e9aea3-acf2-43d3-9f95-9c2b714daab3\") " Feb 19 13:03:42 crc kubenswrapper[4833]: I0219 13:03:42.073413 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cqsh\" (UniqueName: \"kubernetes.io/projected/60e9aea3-acf2-43d3-9f95-9c2b714daab3-kube-api-access-4cqsh\") pod \"60e9aea3-acf2-43d3-9f95-9c2b714daab3\" (UID: \"60e9aea3-acf2-43d3-9f95-9c2b714daab3\") " Feb 19 13:03:42 crc kubenswrapper[4833]: I0219 13:03:42.073666 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f47b39f-04f6-47b1-9f13-a0ef00e2dad1-config" (OuterVolumeSpecName: "config") pod "1f47b39f-04f6-47b1-9f13-a0ef00e2dad1" (UID: "1f47b39f-04f6-47b1-9f13-a0ef00e2dad1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:03:42 crc kubenswrapper[4833]: I0219 13:03:42.073876 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1f47b39f-04f6-47b1-9f13-a0ef00e2dad1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 13:03:42 crc kubenswrapper[4833]: I0219 13:03:42.073893 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f47b39f-04f6-47b1-9f13-a0ef00e2dad1-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:03:42 crc kubenswrapper[4833]: I0219 13:03:42.074746 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60e9aea3-acf2-43d3-9f95-9c2b714daab3-config" (OuterVolumeSpecName: "config") pod "60e9aea3-acf2-43d3-9f95-9c2b714daab3" (UID: "60e9aea3-acf2-43d3-9f95-9c2b714daab3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:03:42 crc kubenswrapper[4833]: I0219 13:03:42.079190 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60e9aea3-acf2-43d3-9f95-9c2b714daab3-kube-api-access-4cqsh" (OuterVolumeSpecName: "kube-api-access-4cqsh") pod "60e9aea3-acf2-43d3-9f95-9c2b714daab3" (UID: "60e9aea3-acf2-43d3-9f95-9c2b714daab3"). InnerVolumeSpecName "kube-api-access-4cqsh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:03:42 crc kubenswrapper[4833]: I0219 13:03:42.079684 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f47b39f-04f6-47b1-9f13-a0ef00e2dad1-kube-api-access-mv4h7" (OuterVolumeSpecName: "kube-api-access-mv4h7") pod "1f47b39f-04f6-47b1-9f13-a0ef00e2dad1" (UID: "1f47b39f-04f6-47b1-9f13-a0ef00e2dad1"). InnerVolumeSpecName "kube-api-access-mv4h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:03:42 crc kubenswrapper[4833]: I0219 13:03:42.176254 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cqsh\" (UniqueName: \"kubernetes.io/projected/60e9aea3-acf2-43d3-9f95-9c2b714daab3-kube-api-access-4cqsh\") on node \"crc\" DevicePath \"\"" Feb 19 13:03:42 crc kubenswrapper[4833]: I0219 13:03:42.176288 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mv4h7\" (UniqueName: \"kubernetes.io/projected/1f47b39f-04f6-47b1-9f13-a0ef00e2dad1-kube-api-access-mv4h7\") on node \"crc\" DevicePath \"\"" Feb 19 13:03:42 crc kubenswrapper[4833]: I0219 13:03:42.176303 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60e9aea3-acf2-43d3-9f95-9c2b714daab3-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:03:42 crc kubenswrapper[4833]: I0219 13:03:42.445174 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1c5ea761-d056-4868-af9f-309486208889","Type":"ContainerStarted","Data":"00edb4397272a668cdedb86da69cd58d9d010917375b45377fa66e59a6a9c0c5"} Feb 19 13:03:42 crc kubenswrapper[4833]: I0219 13:03:42.446614 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-g4tm6" event={"ID":"60e9aea3-acf2-43d3-9f95-9c2b714daab3","Type":"ContainerDied","Data":"21f053589a2eb501aba07f311e2ccfbca2e5e7bbafb7739bef7c999df2ee55c2"} Feb 19 13:03:42 crc kubenswrapper[4833]: I0219 13:03:42.446645 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-g4tm6" Feb 19 13:03:42 crc kubenswrapper[4833]: I0219 13:03:42.448269 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-9qwvt" Feb 19 13:03:42 crc kubenswrapper[4833]: I0219 13:03:42.448283 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-9qwvt" event={"ID":"1f47b39f-04f6-47b1-9f13-a0ef00e2dad1","Type":"ContainerDied","Data":"ca08854244cab3350ed0e9d0ddee6c615a1c2b6562534d8f05741299f7429a50"} Feb 19 13:03:42 crc kubenswrapper[4833]: I0219 13:03:42.450036 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"55636a14-4194-419e-be9c-d4f8c4064d77","Type":"ContainerStarted","Data":"69f631ec6837b3aec4963191c421ed6e5e11bfc6b48178569dd18870a5d6ccec"} Feb 19 13:03:42 crc kubenswrapper[4833]: I0219 13:03:42.494740 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-g4tm6"] Feb 19 13:03:42 crc kubenswrapper[4833]: I0219 13:03:42.499479 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-g4tm6"] Feb 19 13:03:42 crc kubenswrapper[4833]: I0219 13:03:42.530686 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9qwvt"] Feb 19 13:03:42 crc kubenswrapper[4833]: I0219 13:03:42.537111 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-9qwvt"] Feb 19 13:03:43 crc kubenswrapper[4833]: I0219 13:03:43.465308 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"679ec18d-1d70-4cc5-8103-b28f0809a45e","Type":"ContainerStarted","Data":"205832ec9570459d3094dc4d72c5f72fc533d4752f32882ffb73f9ceb03d06c3"} Feb 19 13:03:43 crc kubenswrapper[4833]: I0219 13:03:43.467881 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9c07579b-ab54-4267-83d6-1d6c0404ba3e","Type":"ContainerStarted","Data":"dba3d1413758072442d9ddfac05eb89afbc310bb7af8b791a2712c2c48b11986"} Feb 19 13:03:43 crc kubenswrapper[4833]: I0219 13:03:43.469815 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"8c04a00b-8613-472f-bf1b-e1d26ed34312","Type":"ContainerStarted","Data":"cb648188a96ea4a85ddc80004c113decabfbcf7e163fafee80daa55e58979a77"} Feb 19 13:03:43 crc kubenswrapper[4833]: I0219 13:03:43.469945 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 19 13:03:43 crc kubenswrapper[4833]: I0219 13:03:43.473406 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4635da17-a051-4ef3-a8e3-f0dc7996cf17","Type":"ContainerStarted","Data":"d391efd0238898c77044c291911202dfb1d9bc42e27c3c9a007aa36c274ce1c7"} Feb 19 13:03:43 crc kubenswrapper[4833]: I0219 13:03:43.473442 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 19 13:03:43 crc kubenswrapper[4833]: I0219 13:03:43.475007 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"866102e5-b1c2-4f33-9c34-312be44faea7","Type":"ContainerStarted","Data":"c82a8141acfd9b411bfa45675e947ec0fcc46db602d39a5ed951925224f8c353"} Feb 19 13:03:43 crc kubenswrapper[4833]: I0219 13:03:43.575645 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" 
podStartSLOduration=6.403211294 podStartE2EDuration="23.575626926s" podCreationTimestamp="2026-02-19 13:03:20 +0000 UTC" firstStartedPulling="2026-02-19 13:03:25.571581714 +0000 UTC m=+1015.967100472" lastFinishedPulling="2026-02-19 13:03:42.743997336 +0000 UTC m=+1033.139516104" observedRunningTime="2026-02-19 13:03:43.550483918 +0000 UTC m=+1033.946002686" watchObservedRunningTime="2026-02-19 13:03:43.575626926 +0000 UTC m=+1033.971145684" Feb 19 13:03:43 crc kubenswrapper[4833]: I0219 13:03:43.584890 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.2591241 podStartE2EDuration="25.584872457s" podCreationTimestamp="2026-02-19 13:03:18 +0000 UTC" firstStartedPulling="2026-02-19 13:03:19.083419873 +0000 UTC m=+1009.478938641" lastFinishedPulling="2026-02-19 13:03:41.40916823 +0000 UTC m=+1031.804686998" observedRunningTime="2026-02-19 13:03:43.57584474 +0000 UTC m=+1033.971363508" watchObservedRunningTime="2026-02-19 13:03:43.584872457 +0000 UTC m=+1033.980391225" Feb 19 13:03:44 crc kubenswrapper[4833]: I0219 13:03:44.322033 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f47b39f-04f6-47b1-9f13-a0ef00e2dad1" path="/var/lib/kubelet/pods/1f47b39f-04f6-47b1-9f13-a0ef00e2dad1/volumes" Feb 19 13:03:44 crc kubenswrapper[4833]: I0219 13:03:44.322396 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60e9aea3-acf2-43d3-9f95-9c2b714daab3" path="/var/lib/kubelet/pods/60e9aea3-acf2-43d3-9f95-9c2b714daab3/volumes" Feb 19 13:03:45 crc kubenswrapper[4833]: I0219 13:03:45.489136 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1c5ea761-d056-4868-af9f-309486208889","Type":"ContainerStarted","Data":"8da391c1f37804ee2333894e725f38f378c7f69341d00ce3cf5d8203b1f5f131"} Feb 19 13:03:45 crc kubenswrapper[4833]: I0219 13:03:45.492199 4833 generic.go:334] "Generic (PLEG): container finished" podID="afdbcf60-89d8-426e-9323-1347d5cb238f" containerID="84a72f99b5593c27b4eb9ae4b56626fd13e9a333a2df203edecff3845b14be55" exitCode=0 Feb 19 13:03:45 crc kubenswrapper[4833]: I0219 13:03:45.492291 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9jlg7" event={"ID":"afdbcf60-89d8-426e-9323-1347d5cb238f","Type":"ContainerDied","Data":"84a72f99b5593c27b4eb9ae4b56626fd13e9a333a2df203edecff3845b14be55"} Feb 19 13:03:45 crc kubenswrapper[4833]: I0219 13:03:45.494637 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fbrgv" event={"ID":"488bba31-e718-4ef1-bd04-6ed3fe165c89","Type":"ContainerStarted","Data":"8ac47cf14faa2f35833fa28fd96f1b0671bdd55772493eaf6dde18a01edc7594"} Feb 19 13:03:45 crc kubenswrapper[4833]: I0219 13:03:45.494738 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-fbrgv" Feb 19 13:03:45 crc kubenswrapper[4833]: I0219 13:03:45.496163 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"55636a14-4194-419e-be9c-d4f8c4064d77","Type":"ContainerStarted","Data":"a741283984feab3ec2f5e2d2da8ade736d92ef505e450a5a8b91f6780d33b4e9"} Feb 19 13:03:45 crc kubenswrapper[4833]: I0219 13:03:45.553045 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-fbrgv" podStartSLOduration=19.057668941 podStartE2EDuration="22.553025594s" podCreationTimestamp="2026-02-19 13:03:23 +0000 UTC" firstStartedPulling="2026-02-19 13:03:41.421299094 +0000 UTC 
m=+1031.816817862" lastFinishedPulling="2026-02-19 13:03:44.916655727 +0000 UTC m=+1035.312174515" observedRunningTime="2026-02-19 13:03:45.549540748 +0000 UTC m=+1035.945059536" watchObservedRunningTime="2026-02-19 13:03:45.553025594 +0000 UTC m=+1035.948544372" Feb 19 13:03:46 crc kubenswrapper[4833]: I0219 13:03:46.511459 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9jlg7" event={"ID":"afdbcf60-89d8-426e-9323-1347d5cb238f","Type":"ContainerStarted","Data":"d14741fe6c2a668f2843d666beb04eb572967a9dcabecc983c137061f359cdaf"} Feb 19 13:03:46 crc kubenswrapper[4833]: I0219 13:03:46.511868 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-9jlg7" event={"ID":"afdbcf60-89d8-426e-9323-1347d5cb238f","Type":"ContainerStarted","Data":"5aca838a446c9939f20c024ce58962d7ae8e16f4ba05866c7dc5b04f3310e268"} Feb 19 13:03:46 crc kubenswrapper[4833]: I0219 13:03:46.512453 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-9jlg7" Feb 19 13:03:46 crc kubenswrapper[4833]: I0219 13:03:46.512519 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-9jlg7" Feb 19 13:03:46 crc kubenswrapper[4833]: I0219 13:03:46.513588 4833 generic.go:334] "Generic (PLEG): container finished" podID="866102e5-b1c2-4f33-9c34-312be44faea7" containerID="c82a8141acfd9b411bfa45675e947ec0fcc46db602d39a5ed951925224f8c353" exitCode=0 Feb 19 13:03:46 crc kubenswrapper[4833]: I0219 13:03:46.513674 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"866102e5-b1c2-4f33-9c34-312be44faea7","Type":"ContainerDied","Data":"c82a8141acfd9b411bfa45675e947ec0fcc46db602d39a5ed951925224f8c353"} Feb 19 13:03:46 crc kubenswrapper[4833]: I0219 13:03:46.518218 4833 generic.go:334] "Generic (PLEG): container finished" podID="679ec18d-1d70-4cc5-8103-b28f0809a45e" containerID="205832ec9570459d3094dc4d72c5f72fc533d4752f32882ffb73f9ceb03d06c3" exitCode=0 Feb 19 13:03:46 crc kubenswrapper[4833]: I0219 13:03:46.518342 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"679ec18d-1d70-4cc5-8103-b28f0809a45e","Type":"ContainerDied","Data":"205832ec9570459d3094dc4d72c5f72fc533d4752f32882ffb73f9ceb03d06c3"} Feb 19 13:03:46 crc kubenswrapper[4833]: I0219 13:03:46.535186 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-9jlg7" podStartSLOduration=20.037009545 podStartE2EDuration="23.53516817s" podCreationTimestamp="2026-02-19 13:03:23 +0000 UTC" firstStartedPulling="2026-02-19 13:03:41.416956299 +0000 UTC m=+1031.812475067" lastFinishedPulling="2026-02-19 13:03:44.915114924 +0000 UTC m=+1035.310633692" observedRunningTime="2026-02-19 13:03:46.534739771 +0000 UTC m=+1036.930258539" watchObservedRunningTime="2026-02-19 13:03:46.53516817 +0000 UTC m=+1036.930686938" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.130854 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-d6gmv"] Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.132299 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-d6gmv" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.134692 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.168595 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-d6gmv"] Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.274415 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba309e83-ab80-44b0-95a6-01034dfcca68-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-d6gmv\" (UID: \"ba309e83-ab80-44b0-95a6-01034dfcca68\") " pod="openstack/ovn-controller-metrics-d6gmv" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.274856 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld8tm\" (UniqueName: \"kubernetes.io/projected/ba309e83-ab80-44b0-95a6-01034dfcca68-kube-api-access-ld8tm\") pod \"ovn-controller-metrics-d6gmv\" (UID: \"ba309e83-ab80-44b0-95a6-01034dfcca68\") " pod="openstack/ovn-controller-metrics-d6gmv" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.274932 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ba309e83-ab80-44b0-95a6-01034dfcca68-ovs-rundir\") pod \"ovn-controller-metrics-d6gmv\" (UID: \"ba309e83-ab80-44b0-95a6-01034dfcca68\") " pod="openstack/ovn-controller-metrics-d6gmv" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.275021 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba309e83-ab80-44b0-95a6-01034dfcca68-combined-ca-bundle\") pod \"ovn-controller-metrics-d6gmv\" (UID: \"ba309e83-ab80-44b0-95a6-01034dfcca68\") " pod="openstack/ovn-controller-metrics-d6gmv" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.275087 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba309e83-ab80-44b0-95a6-01034dfcca68-config\") pod \"ovn-controller-metrics-d6gmv\" (UID: \"ba309e83-ab80-44b0-95a6-01034dfcca68\") " pod="openstack/ovn-controller-metrics-d6gmv" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.275163 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ba309e83-ab80-44b0-95a6-01034dfcca68-ovn-rundir\") pod \"ovn-controller-metrics-d6gmv\" (UID: \"ba309e83-ab80-44b0-95a6-01034dfcca68\") " pod="openstack/ovn-controller-metrics-d6gmv" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.376483 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld8tm\" (UniqueName: \"kubernetes.io/projected/ba309e83-ab80-44b0-95a6-01034dfcca68-kube-api-access-ld8tm\") pod \"ovn-controller-metrics-d6gmv\" (UID: \"ba309e83-ab80-44b0-95a6-01034dfcca68\") " pod="openstack/ovn-controller-metrics-d6gmv" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.376770 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ba309e83-ab80-44b0-95a6-01034dfcca68-ovs-rundir\") pod \"ovn-controller-metrics-d6gmv\" (UID: 
\"ba309e83-ab80-44b0-95a6-01034dfcca68\") " pod="openstack/ovn-controller-metrics-d6gmv" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.376816 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba309e83-ab80-44b0-95a6-01034dfcca68-combined-ca-bundle\") pod \"ovn-controller-metrics-d6gmv\" (UID: \"ba309e83-ab80-44b0-95a6-01034dfcca68\") " pod="openstack/ovn-controller-metrics-d6gmv" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.376850 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba309e83-ab80-44b0-95a6-01034dfcca68-config\") pod \"ovn-controller-metrics-d6gmv\" (UID: \"ba309e83-ab80-44b0-95a6-01034dfcca68\") " pod="openstack/ovn-controller-metrics-d6gmv" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.376868 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ba309e83-ab80-44b0-95a6-01034dfcca68-ovn-rundir\") pod \"ovn-controller-metrics-d6gmv\" (UID: \"ba309e83-ab80-44b0-95a6-01034dfcca68\") " pod="openstack/ovn-controller-metrics-d6gmv" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.376889 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba309e83-ab80-44b0-95a6-01034dfcca68-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-d6gmv\" (UID: \"ba309e83-ab80-44b0-95a6-01034dfcca68\") " pod="openstack/ovn-controller-metrics-d6gmv" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.377088 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ba309e83-ab80-44b0-95a6-01034dfcca68-ovs-rundir\") pod \"ovn-controller-metrics-d6gmv\" (UID: \"ba309e83-ab80-44b0-95a6-01034dfcca68\") " pod="openstack/ovn-controller-metrics-d6gmv" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.377252 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ba309e83-ab80-44b0-95a6-01034dfcca68-ovn-rundir\") pod \"ovn-controller-metrics-d6gmv\" (UID: \"ba309e83-ab80-44b0-95a6-01034dfcca68\") " pod="openstack/ovn-controller-metrics-d6gmv" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.377605 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba309e83-ab80-44b0-95a6-01034dfcca68-config\") pod \"ovn-controller-metrics-d6gmv\" (UID: \"ba309e83-ab80-44b0-95a6-01034dfcca68\") " pod="openstack/ovn-controller-metrics-d6gmv" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.381885 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba309e83-ab80-44b0-95a6-01034dfcca68-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-d6gmv\" (UID: \"ba309e83-ab80-44b0-95a6-01034dfcca68\") " pod="openstack/ovn-controller-metrics-d6gmv" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.382351 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba309e83-ab80-44b0-95a6-01034dfcca68-combined-ca-bundle\") pod \"ovn-controller-metrics-d6gmv\" (UID: \"ba309e83-ab80-44b0-95a6-01034dfcca68\") " pod="openstack/ovn-controller-metrics-d6gmv" Feb 19 13:03:47 crc 
kubenswrapper[4833]: I0219 13:03:47.393416 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zx2ph"] Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.396330 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld8tm\" (UniqueName: \"kubernetes.io/projected/ba309e83-ab80-44b0-95a6-01034dfcca68-kube-api-access-ld8tm\") pod \"ovn-controller-metrics-d6gmv\" (UID: \"ba309e83-ab80-44b0-95a6-01034dfcca68\") " pod="openstack/ovn-controller-metrics-d6gmv" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.459381 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-mvc4l"] Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.460637 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-mvc4l" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.464764 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.482313 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-d6gmv" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.482891 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-mvc4l"] Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.544608 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"866102e5-b1c2-4f33-9c34-312be44faea7","Type":"ContainerStarted","Data":"c6d030e75d0a93822c7ede6848e3416f84fce76bfe18bf4088202d8b044f2aa1"} Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.567195 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"679ec18d-1d70-4cc5-8103-b28f0809a45e","Type":"ContainerStarted","Data":"a5313e999433df51c336945877bed126c4a50c4ba8c491d72a763a8bd7448c7e"} Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.573633 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"55636a14-4194-419e-be9c-d4f8c4064d77","Type":"ContainerStarted","Data":"6038325fc016d9cac8a569a0a2ee97a6df482c106852927cbafd974305f854e7"} Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.578198 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-q9v7d"] Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.580350 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=8.543006312 podStartE2EDuration="30.580334409s" podCreationTimestamp="2026-02-19 13:03:17 +0000 UTC" firstStartedPulling="2026-02-19 13:03:19.371884814 +0000 UTC m=+1009.767403582" lastFinishedPulling="2026-02-19 13:03:41.409212911 +0000 UTC m=+1031.804731679" observedRunningTime="2026-02-19 13:03:47.568060772 +0000 UTC m=+1037.963579550" watchObservedRunningTime="2026-02-19 13:03:47.580334409 +0000 UTC m=+1037.975853177" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.580673 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91fd3e8f-0f4e-4fa4-939f-e09f4f75c408-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-mvc4l\" (UID: \"91fd3e8f-0f4e-4fa4-939f-e09f4f75c408\") " pod="openstack/dnsmasq-dns-6bc7876d45-mvc4l" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 
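The pod_startup_latency_tracker.go:104 entry above for openstack-cell1-galera-0 reports two durations whose relationship can be checked directly from its own fields: podStartE2EDuration (30.580334409s) is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration (8.543006312) is that same span minus the image-pull window from firstStartedPulling to lastFinishedPulling (22.037328097s). A minimal sketch of the arithmetic, assuming, as the matching numbers suggest, that the SLO figure simply excludes time spent pulling images:

    // slo_duration.go: reproduces the numbers in the log entry above.
    // Assumption: podStartSLOduration = (watchObservedRunningTime -
    // podCreationTimestamp) - (lastFinishedPulling - firstStartedPulling).
    package main

    import (
        "fmt"
        "time"
    )

    // mustParse parses the timestamp format used in the log fields.
    func mustParse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2026-02-19 13:03:17 +0000 UTC")
        firstPull := mustParse("2026-02-19 13:03:19.371884814 +0000 UTC")
        lastPull := mustParse("2026-02-19 13:03:41.409212911 +0000 UTC")
        observed := mustParse("2026-02-19 13:03:47.580334409 +0000 UTC")

        e2e := observed.Sub(created)       // 30.580334409s
        pulling := lastPull.Sub(firstPull) // 22.037328097s
        slo := e2e - pulling               // 8.543006312s
        fmt.Println(e2e, pulling, slo)
    }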
Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.580760 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91fd3e8f-0f4e-4fa4-939f-e09f4f75c408-config\") pod \"dnsmasq-dns-6bc7876d45-mvc4l\" (UID: \"91fd3e8f-0f4e-4fa4-939f-e09f4f75c408\") " pod="openstack/dnsmasq-dns-6bc7876d45-mvc4l"
Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.580792 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91fd3e8f-0f4e-4fa4-939f-e09f4f75c408-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-mvc4l\" (UID: \"91fd3e8f-0f4e-4fa4-939f-e09f4f75c408\") " pod="openstack/dnsmasq-dns-6bc7876d45-mvc4l"
Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.580854 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sknjk\" (UniqueName: \"kubernetes.io/projected/91fd3e8f-0f4e-4fa4-939f-e09f4f75c408-kube-api-access-sknjk\") pod \"dnsmasq-dns-6bc7876d45-mvc4l\" (UID: \"91fd3e8f-0f4e-4fa4-939f-e09f4f75c408\") " pod="openstack/dnsmasq-dns-6bc7876d45-mvc4l"
Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.597732 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-fcs9l"]
Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.601482 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1c5ea761-d056-4868-af9f-309486208889","Type":"ContainerStarted","Data":"9d5225356818d4cefc6c2a552bcc6f303388e104503d69c718ee45ec5c5706a3"}
Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.601639 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-fcs9l"
Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.606858 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-fcs9l"]
Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.612015 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.615288 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=9.203358091 podStartE2EDuration="32.61526552s" podCreationTimestamp="2026-02-19 13:03:15 +0000 UTC" firstStartedPulling="2026-02-19 13:03:17.997267551 +0000 UTC m=+1008.392786319" lastFinishedPulling="2026-02-19 13:03:41.40917498 +0000 UTC m=+1031.804693748" observedRunningTime="2026-02-19 13:03:47.610832693 +0000 UTC m=+1038.006351461" watchObservedRunningTime="2026-02-19 13:03:47.61526552 +0000 UTC m=+1038.010784288"
Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.651404 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=19.899901792 podStartE2EDuration="24.651385947s" podCreationTimestamp="2026-02-19 13:03:23 +0000 UTC" firstStartedPulling="2026-02-19 13:03:41.974116212 +0000 UTC m=+1032.369634980" lastFinishedPulling="2026-02-19 13:03:46.725600377 +0000 UTC m=+1037.121119135" observedRunningTime="2026-02-19 13:03:47.632030665 +0000 UTC m=+1038.027549433" watchObservedRunningTime="2026-02-19 13:03:47.651385947 +0000 UTC m=+1038.046904715"
Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.685944 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91fd3e8f-0f4e-4fa4-939f-e09f4f75c408-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-mvc4l\" (UID: \"91fd3e8f-0f4e-4fa4-939f-e09f4f75c408\") " pod="openstack/dnsmasq-dns-6bc7876d45-mvc4l"
Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.686209 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36e91fa6-1254-4adb-afeb-736f39dc7e88-dns-svc\") pod \"dnsmasq-dns-8554648995-fcs9l\" (UID: \"36e91fa6-1254-4adb-afeb-736f39dc7e88\") " pod="openstack/dnsmasq-dns-8554648995-fcs9l"
Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.686329 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91fd3e8f-0f4e-4fa4-939f-e09f4f75c408-config\") pod \"dnsmasq-dns-6bc7876d45-mvc4l\" (UID: \"91fd3e8f-0f4e-4fa4-939f-e09f4f75c408\") " pod="openstack/dnsmasq-dns-6bc7876d45-mvc4l"
Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.686361 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91fd3e8f-0f4e-4fa4-939f-e09f4f75c408-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-mvc4l\" (UID: \"91fd3e8f-0f4e-4fa4-939f-e09f4f75c408\") " pod="openstack/dnsmasq-dns-6bc7876d45-mvc4l"
Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.686400 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36e91fa6-1254-4adb-afeb-736f39dc7e88-config\") pod \"dnsmasq-dns-8554648995-fcs9l\" (UID: \"36e91fa6-1254-4adb-afeb-736f39dc7e88\") " pod="openstack/dnsmasq-dns-8554648995-fcs9l"
Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.686438 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsdcq\" (UniqueName: \"kubernetes.io/projected/36e91fa6-1254-4adb-afeb-736f39dc7e88-kube-api-access-xsdcq\") pod \"dnsmasq-dns-8554648995-fcs9l\" (UID: \"36e91fa6-1254-4adb-afeb-736f39dc7e88\") " pod="openstack/dnsmasq-dns-8554648995-fcs9l"
Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.686468 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sknjk\" (UniqueName: \"kubernetes.io/projected/91fd3e8f-0f4e-4fa4-939f-e09f4f75c408-kube-api-access-sknjk\") pod \"dnsmasq-dns-6bc7876d45-mvc4l\" (UID: \"91fd3e8f-0f4e-4fa4-939f-e09f4f75c408\") " pod="openstack/dnsmasq-dns-6bc7876d45-mvc4l"
Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.686484 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36e91fa6-1254-4adb-afeb-736f39dc7e88-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-fcs9l\" (UID: \"36e91fa6-1254-4adb-afeb-736f39dc7e88\") " pod="openstack/dnsmasq-dns-8554648995-fcs9l"
Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.690374 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=16.616924529 podStartE2EDuration="21.690357755s" podCreationTimestamp="2026-02-19 13:03:26 +0000 UTC" firstStartedPulling="2026-02-19 13:03:41.628725511 +0000 UTC m=+1032.024244269" lastFinishedPulling="2026-02-19 13:03:46.702158727 +0000 UTC m=+1037.097677495" observedRunningTime="2026-02-19 13:03:47.686022451 +0000 UTC m=+1038.081541219" watchObservedRunningTime="2026-02-19 13:03:47.690357755 +0000 UTC m=+1038.085876523"
Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.691883 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91fd3e8f-0f4e-4fa4-939f-e09f4f75c408-config\") pod \"dnsmasq-dns-6bc7876d45-mvc4l\" (UID: \"91fd3e8f-0f4e-4fa4-939f-e09f4f75c408\") " pod="openstack/dnsmasq-dns-6bc7876d45-mvc4l"
Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.692541 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91fd3e8f-0f4e-4fa4-939f-e09f4f75c408-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-mvc4l\" (UID: \"91fd3e8f-0f4e-4fa4-939f-e09f4f75c408\") " pod="openstack/dnsmasq-dns-6bc7876d45-mvc4l"
Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.692837 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36e91fa6-1254-4adb-afeb-736f39dc7e88-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-fcs9l\" (UID: \"36e91fa6-1254-4adb-afeb-736f39dc7e88\") " pod="openstack/dnsmasq-dns-8554648995-fcs9l"
Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.693408 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91fd3e8f-0f4e-4fa4-939f-e09f4f75c408-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-mvc4l\" (UID: \"91fd3e8f-0f4e-4fa4-939f-e09f4f75c408\") " pod="openstack/dnsmasq-dns-6bc7876d45-mvc4l"
Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.719392 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sknjk\" (UniqueName: \"kubernetes.io/projected/91fd3e8f-0f4e-4fa4-939f-e09f4f75c408-kube-api-access-sknjk\") pod \"dnsmasq-dns-6bc7876d45-mvc4l\" (UID: \"91fd3e8f-0f4e-4fa4-939f-e09f4f75c408\") " pod="openstack/dnsmasq-dns-6bc7876d45-mvc4l"
Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.723414 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.737949 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-zx2ph"
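The reconciler_common.go and operation_generator.go entries above and below trace single passes of the kubelet volume manager's reconcile loop: each volume a pod wants is verified as attached (reconciler_common.go:245), a mount is started (reconciler_common.go:218) and reported succeeded (operation_generator.go:637), and the mirror-image UnmountVolume/TearDown/"Volume detached" entries follow when a pod leaves the desired state. A toy sketch of that desired-state versus actual-state shape, with invented names; this is not kubelet code:

    // reconciler_sketch.go: illustrative only. Mirrors the pattern the log
    // entries trace, not the kubelet's actual implementation.
    package main

    import "fmt"

    type volume struct{ name, pod string }

    type state struct {
        desired map[string]volume // volumes pods currently want mounted
        actual  map[string]volume // volumes currently mounted
    }

    // reconcile mounts whatever is desired but not actual, and unmounts
    // whatever is actual but no longer desired.
    func (s *state) reconcile() {
        for key, v := range s.desired {
            if _, ok := s.actual[key]; !ok {
                fmt.Printf("MountVolume started for volume %q pod %q\n", v.name, v.pod)
                s.actual[key] = v // stand-in for the SetUp operation
                fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.name)
            }
        }
        for key, v := range s.actual {
            if _, ok := s.desired[key]; !ok {
                fmt.Printf("UnmountVolume started for volume %q\n", v.name)
                delete(s.actual, key)
            }
        }
    }

    func main() {
        s := &state{
            desired: map[string]volume{"cfg": {"config", "dnsmasq-dns-6bc7876d45-mvc4l"}},
            actual:  map[string]volume{},
        }
        s.reconcile() // mounts "config"; a later pass with desired emptied would unmount it
    }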
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-mvc4l" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.793902 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5042bc93-330a-4ddf-819f-d772da7b4360-config\") pod \"5042bc93-330a-4ddf-819f-d772da7b4360\" (UID: \"5042bc93-330a-4ddf-819f-d772da7b4360\") " Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.793956 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5m9j\" (UniqueName: \"kubernetes.io/projected/5042bc93-330a-4ddf-819f-d772da7b4360-kube-api-access-c5m9j\") pod \"5042bc93-330a-4ddf-819f-d772da7b4360\" (UID: \"5042bc93-330a-4ddf-819f-d772da7b4360\") " Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.793975 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5042bc93-330a-4ddf-819f-d772da7b4360-dns-svc\") pod \"5042bc93-330a-4ddf-819f-d772da7b4360\" (UID: \"5042bc93-330a-4ddf-819f-d772da7b4360\") " Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.794219 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36e91fa6-1254-4adb-afeb-736f39dc7e88-config\") pod \"dnsmasq-dns-8554648995-fcs9l\" (UID: \"36e91fa6-1254-4adb-afeb-736f39dc7e88\") " pod="openstack/dnsmasq-dns-8554648995-fcs9l" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.794247 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsdcq\" (UniqueName: \"kubernetes.io/projected/36e91fa6-1254-4adb-afeb-736f39dc7e88-kube-api-access-xsdcq\") pod \"dnsmasq-dns-8554648995-fcs9l\" (UID: \"36e91fa6-1254-4adb-afeb-736f39dc7e88\") " pod="openstack/dnsmasq-dns-8554648995-fcs9l" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.794273 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36e91fa6-1254-4adb-afeb-736f39dc7e88-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-fcs9l\" (UID: \"36e91fa6-1254-4adb-afeb-736f39dc7e88\") " pod="openstack/dnsmasq-dns-8554648995-fcs9l" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.794325 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36e91fa6-1254-4adb-afeb-736f39dc7e88-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-fcs9l\" (UID: \"36e91fa6-1254-4adb-afeb-736f39dc7e88\") " pod="openstack/dnsmasq-dns-8554648995-fcs9l" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.794355 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36e91fa6-1254-4adb-afeb-736f39dc7e88-dns-svc\") pod \"dnsmasq-dns-8554648995-fcs9l\" (UID: \"36e91fa6-1254-4adb-afeb-736f39dc7e88\") " pod="openstack/dnsmasq-dns-8554648995-fcs9l" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.795275 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36e91fa6-1254-4adb-afeb-736f39dc7e88-dns-svc\") pod \"dnsmasq-dns-8554648995-fcs9l\" (UID: \"36e91fa6-1254-4adb-afeb-736f39dc7e88\") " pod="openstack/dnsmasq-dns-8554648995-fcs9l" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.795291 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36e91fa6-1254-4adb-afeb-736f39dc7e88-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-fcs9l\" (UID: \"36e91fa6-1254-4adb-afeb-736f39dc7e88\") " pod="openstack/dnsmasq-dns-8554648995-fcs9l" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.795445 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36e91fa6-1254-4adb-afeb-736f39dc7e88-config\") pod \"dnsmasq-dns-8554648995-fcs9l\" (UID: \"36e91fa6-1254-4adb-afeb-736f39dc7e88\") " pod="openstack/dnsmasq-dns-8554648995-fcs9l" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.796033 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5042bc93-330a-4ddf-819f-d772da7b4360-config" (OuterVolumeSpecName: "config") pod "5042bc93-330a-4ddf-819f-d772da7b4360" (UID: "5042bc93-330a-4ddf-819f-d772da7b4360"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.796460 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5042bc93-330a-4ddf-819f-d772da7b4360-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5042bc93-330a-4ddf-819f-d772da7b4360" (UID: "5042bc93-330a-4ddf-819f-d772da7b4360"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.796914 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36e91fa6-1254-4adb-afeb-736f39dc7e88-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-fcs9l\" (UID: \"36e91fa6-1254-4adb-afeb-736f39dc7e88\") " pod="openstack/dnsmasq-dns-8554648995-fcs9l" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.799658 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5042bc93-330a-4ddf-819f-d772da7b4360-kube-api-access-c5m9j" (OuterVolumeSpecName: "kube-api-access-c5m9j") pod "5042bc93-330a-4ddf-819f-d772da7b4360" (UID: "5042bc93-330a-4ddf-819f-d772da7b4360"). InnerVolumeSpecName "kube-api-access-c5m9j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.815305 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsdcq\" (UniqueName: \"kubernetes.io/projected/36e91fa6-1254-4adb-afeb-736f39dc7e88-kube-api-access-xsdcq\") pod \"dnsmasq-dns-8554648995-fcs9l\" (UID: \"36e91fa6-1254-4adb-afeb-736f39dc7e88\") " pod="openstack/dnsmasq-dns-8554648995-fcs9l" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.898740 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5042bc93-330a-4ddf-819f-d772da7b4360-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.898772 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5m9j\" (UniqueName: \"kubernetes.io/projected/5042bc93-330a-4ddf-819f-d772da7b4360-kube-api-access-c5m9j\") on node \"crc\" DevicePath \"\"" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.898781 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5042bc93-330a-4ddf-819f-d772da7b4360-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.906479 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-q9v7d" Feb 19 13:03:47 crc kubenswrapper[4833]: I0219 13:03:47.929663 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-fcs9l" Feb 19 13:03:48 crc kubenswrapper[4833]: I0219 13:03:47.999868 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da63f2d3-04cc-49fd-800a-31ef3a9dfd7c-config\") pod \"da63f2d3-04cc-49fd-800a-31ef3a9dfd7c\" (UID: \"da63f2d3-04cc-49fd-800a-31ef3a9dfd7c\") " Feb 19 13:03:48 crc kubenswrapper[4833]: I0219 13:03:48.000232 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksfqf\" (UniqueName: \"kubernetes.io/projected/da63f2d3-04cc-49fd-800a-31ef3a9dfd7c-kube-api-access-ksfqf\") pod \"da63f2d3-04cc-49fd-800a-31ef3a9dfd7c\" (UID: \"da63f2d3-04cc-49fd-800a-31ef3a9dfd7c\") " Feb 19 13:03:48 crc kubenswrapper[4833]: I0219 13:03:48.000273 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da63f2d3-04cc-49fd-800a-31ef3a9dfd7c-dns-svc\") pod \"da63f2d3-04cc-49fd-800a-31ef3a9dfd7c\" (UID: \"da63f2d3-04cc-49fd-800a-31ef3a9dfd7c\") " Feb 19 13:03:48 crc kubenswrapper[4833]: I0219 13:03:48.000728 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da63f2d3-04cc-49fd-800a-31ef3a9dfd7c-config" (OuterVolumeSpecName: "config") pod "da63f2d3-04cc-49fd-800a-31ef3a9dfd7c" (UID: "da63f2d3-04cc-49fd-800a-31ef3a9dfd7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:03:48 crc kubenswrapper[4833]: I0219 13:03:48.000979 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da63f2d3-04cc-49fd-800a-31ef3a9dfd7c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "da63f2d3-04cc-49fd-800a-31ef3a9dfd7c" (UID: "da63f2d3-04cc-49fd-800a-31ef3a9dfd7c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:03:48 crc kubenswrapper[4833]: I0219 13:03:48.020071 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da63f2d3-04cc-49fd-800a-31ef3a9dfd7c-kube-api-access-ksfqf" (OuterVolumeSpecName: "kube-api-access-ksfqf") pod "da63f2d3-04cc-49fd-800a-31ef3a9dfd7c" (UID: "da63f2d3-04cc-49fd-800a-31ef3a9dfd7c"). InnerVolumeSpecName "kube-api-access-ksfqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:03:48 crc kubenswrapper[4833]: I0219 13:03:48.062955 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-d6gmv"] Feb 19 13:03:48 crc kubenswrapper[4833]: W0219 13:03:48.077675 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba309e83_ab80_44b0_95a6_01034dfcca68.slice/crio-3c76c3caba7ad4a98347173971303d7f4b78633a4468cd8e6bce77817ab43cf5 WatchSource:0}: Error finding container 3c76c3caba7ad4a98347173971303d7f4b78633a4468cd8e6bce77817ab43cf5: Status 404 returned error can't find the container with id 3c76c3caba7ad4a98347173971303d7f4b78633a4468cd8e6bce77817ab43cf5 Feb 19 13:03:48 crc kubenswrapper[4833]: I0219 13:03:48.102346 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksfqf\" (UniqueName: \"kubernetes.io/projected/da63f2d3-04cc-49fd-800a-31ef3a9dfd7c-kube-api-access-ksfqf\") on node \"crc\" DevicePath \"\"" Feb 19 13:03:48 crc kubenswrapper[4833]: I0219 13:03:48.102426 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da63f2d3-04cc-49fd-800a-31ef3a9dfd7c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 13:03:48 crc kubenswrapper[4833]: I0219 13:03:48.102440 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da63f2d3-04cc-49fd-800a-31ef3a9dfd7c-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:03:48 crc kubenswrapper[4833]: I0219 13:03:48.153631 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-fcs9l"] Feb 19 13:03:48 crc kubenswrapper[4833]: I0219 13:03:48.249218 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-mvc4l"] Feb 19 13:03:48 crc kubenswrapper[4833]: W0219 13:03:48.253285 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod91fd3e8f_0f4e_4fa4_939f_e09f4f75c408.slice/crio-35fe27d074922a4ae27a2d5f7a9fd0ab15cf442c157077e84311d3336046881d WatchSource:0}: Error finding container 35fe27d074922a4ae27a2d5f7a9fd0ab15cf442c157077e84311d3336046881d: Status 404 returned error can't find the container with id 35fe27d074922a4ae27a2d5f7a9fd0ab15cf442c157077e84311d3336046881d Feb 19 13:03:48 crc kubenswrapper[4833]: I0219 13:03:48.617897 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-mvc4l" event={"ID":"91fd3e8f-0f4e-4fa4-939f-e09f4f75c408","Type":"ContainerStarted","Data":"35fe27d074922a4ae27a2d5f7a9fd0ab15cf442c157077e84311d3336046881d"} Feb 19 13:03:48 crc kubenswrapper[4833]: I0219 13:03:48.620555 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-d6gmv" event={"ID":"ba309e83-ab80-44b0-95a6-01034dfcca68","Type":"ContainerStarted","Data":"9e9bf8935a0d7cbc13ac8a7477326ba3b961f5b337598b2ce728493e5f84ad28"} Feb 19 13:03:48 crc kubenswrapper[4833]: 
I0219 13:03:48.620578 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-d6gmv" event={"ID":"ba309e83-ab80-44b0-95a6-01034dfcca68","Type":"ContainerStarted","Data":"3c76c3caba7ad4a98347173971303d7f4b78633a4468cd8e6bce77817ab43cf5"} Feb 19 13:03:48 crc kubenswrapper[4833]: I0219 13:03:48.622140 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-fcs9l" event={"ID":"36e91fa6-1254-4adb-afeb-736f39dc7e88","Type":"ContainerStarted","Data":"ebdf422d560b9155ef98fc748ec85a7cb6c3262b3e1b90eff685fbeeff34486a"} Feb 19 13:03:48 crc kubenswrapper[4833]: I0219 13:03:48.623908 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-zx2ph" event={"ID":"5042bc93-330a-4ddf-819f-d772da7b4360","Type":"ContainerDied","Data":"05dfb8afe74fc0eeb151a16cc9655eab556c253eda1b0d053485a6d08a4d6168"} Feb 19 13:03:48 crc kubenswrapper[4833]: I0219 13:03:48.624019 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-zx2ph" Feb 19 13:03:48 crc kubenswrapper[4833]: I0219 13:03:48.626593 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-q9v7d" event={"ID":"da63f2d3-04cc-49fd-800a-31ef3a9dfd7c","Type":"ContainerDied","Data":"4547e6d7355c43014a7084db4e698938a754f669f1d7fce8b9534daedcf418a3"} Feb 19 13:03:48 crc kubenswrapper[4833]: I0219 13:03:48.627439 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-q9v7d" Feb 19 13:03:48 crc kubenswrapper[4833]: I0219 13:03:48.669904 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-q9v7d"] Feb 19 13:03:48 crc kubenswrapper[4833]: I0219 13:03:48.681859 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-q9v7d"] Feb 19 13:03:48 crc kubenswrapper[4833]: I0219 13:03:48.684557 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 19 13:03:48 crc kubenswrapper[4833]: I0219 13:03:48.684787 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 19 13:03:48 crc kubenswrapper[4833]: I0219 13:03:48.700451 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zx2ph"] Feb 19 13:03:48 crc kubenswrapper[4833]: I0219 13:03:48.706899 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-zx2ph"] Feb 19 13:03:48 crc kubenswrapper[4833]: I0219 13:03:48.721563 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 19 13:03:48 crc kubenswrapper[4833]: I0219 13:03:48.746640 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 19 13:03:48 crc kubenswrapper[4833]: I0219 13:03:48.757275 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 19 13:03:49 crc kubenswrapper[4833]: I0219 13:03:49.245796 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 19 13:03:49 crc kubenswrapper[4833]: I0219 13:03:49.285432 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 19 13:03:49 crc kubenswrapper[4833]: I0219 13:03:49.635622 4833 generic.go:334] "Generic (PLEG): 
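The SyncLoop (probe) entries above show the startup probes for ovsdbserver-sb-0 and ovsdbserver-nb-0 flipping from status="unhealthy" to status="started", and only after that do their readiness probes move from status="" to status="ready". That matches the documented gating: while a pod's startup probe has not yet succeeded, the other probes are held back. A toy model of that gating, illustrative only and not kubelet code:

    // probe_gate_sketch.go: toy model of the startup-probe gating visible in
    // the SyncLoop (probe) entries. Illustrative only.
    package main

    import "fmt"

    type podProbes struct {
        started bool // startup probe has succeeded at least once
        ready   bool // last accepted readiness result
    }

    func (p *podProbes) onStartupResult(ok bool) {
        if ok && !p.started {
            p.started = true
            fmt.Println(`probe="startup" status="started"`)
        } else if !ok {
            fmt.Println(`probe="startup" status="unhealthy"`)
        }
    }

    func (p *podProbes) onReadinessResult(ok bool) {
        if !p.started {
            return // readiness is not consulted until startup succeeds
        }
        p.ready = ok
        if ok {
            fmt.Println(`probe="readiness" status="ready"`)
        } else {
            fmt.Println(`probe="readiness" status=""`)
        }
    }

    func main() {
        p := &podProbes{}
        p.onStartupResult(false)  // logged as "unhealthy"
        p.onReadinessResult(true) // ignored: startup not done yet
        p.onStartupResult(true)   // logged as "started"
        p.onReadinessResult(true) // now logged as "ready"
    }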
container finished" podID="91fd3e8f-0f4e-4fa4-939f-e09f4f75c408" containerID="95da9a4c804e17f53b7f8fc05e553ec407281005941e1879dd09152ec6c0594c" exitCode=0 Feb 19 13:03:49 crc kubenswrapper[4833]: I0219 13:03:49.635704 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-mvc4l" event={"ID":"91fd3e8f-0f4e-4fa4-939f-e09f4f75c408","Type":"ContainerDied","Data":"95da9a4c804e17f53b7f8fc05e553ec407281005941e1879dd09152ec6c0594c"} Feb 19 13:03:49 crc kubenswrapper[4833]: I0219 13:03:49.637792 4833 generic.go:334] "Generic (PLEG): container finished" podID="36e91fa6-1254-4adb-afeb-736f39dc7e88" containerID="d57efa3b2d63c7cb77379426365006be7ad44302b2d0986101da3504ae17e998" exitCode=0 Feb 19 13:03:49 crc kubenswrapper[4833]: I0219 13:03:49.637862 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-fcs9l" event={"ID":"36e91fa6-1254-4adb-afeb-736f39dc7e88","Type":"ContainerDied","Data":"d57efa3b2d63c7cb77379426365006be7ad44302b2d0986101da3504ae17e998"} Feb 19 13:03:49 crc kubenswrapper[4833]: I0219 13:03:49.639893 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 19 13:03:49 crc kubenswrapper[4833]: I0219 13:03:49.721526 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-d6gmv" podStartSLOduration=2.721487264 podStartE2EDuration="2.721487264s" podCreationTimestamp="2026-02-19 13:03:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:03:49.707641741 +0000 UTC m=+1040.103160539" watchObservedRunningTime="2026-02-19 13:03:49.721487264 +0000 UTC m=+1040.117006032" Feb 19 13:03:50 crc kubenswrapper[4833]: I0219 13:03:50.297202 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 19 13:03:50 crc kubenswrapper[4833]: I0219 13:03:50.327971 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5042bc93-330a-4ddf-819f-d772da7b4360" path="/var/lib/kubelet/pods/5042bc93-330a-4ddf-819f-d772da7b4360/volumes" Feb 19 13:03:50 crc kubenswrapper[4833]: I0219 13:03:50.328350 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da63f2d3-04cc-49fd-800a-31ef3a9dfd7c" path="/var/lib/kubelet/pods/da63f2d3-04cc-49fd-800a-31ef3a9dfd7c/volumes" Feb 19 13:03:50 crc kubenswrapper[4833]: I0219 13:03:50.649792 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-fcs9l" event={"ID":"36e91fa6-1254-4adb-afeb-736f39dc7e88","Type":"ContainerStarted","Data":"f985028a35a283f5cb56ceb261aa3874cdd9f26f1e67cb031a8731d7a1d3295e"} Feb 19 13:03:50 crc kubenswrapper[4833]: I0219 13:03:50.650124 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-fcs9l" Feb 19 13:03:50 crc kubenswrapper[4833]: I0219 13:03:50.651971 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-mvc4l" event={"ID":"91fd3e8f-0f4e-4fa4-939f-e09f4f75c408","Type":"ContainerStarted","Data":"e589a42651d566d59f127f2d2da1097f03189d353e97434ee440973d05d307f4"} Feb 19 13:03:50 crc kubenswrapper[4833]: I0219 13:03:50.652462 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bc7876d45-mvc4l" Feb 19 13:03:50 crc kubenswrapper[4833]: I0219 13:03:50.693445 4833 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/dnsmasq-dns-8554648995-fcs9l" podStartSLOduration=2.835874595 podStartE2EDuration="3.693420229s" podCreationTimestamp="2026-02-19 13:03:47 +0000 UTC" firstStartedPulling="2026-02-19 13:03:48.160818619 +0000 UTC m=+1038.556337387" lastFinishedPulling="2026-02-19 13:03:49.018364253 +0000 UTC m=+1039.413883021" observedRunningTime="2026-02-19 13:03:50.682835389 +0000 UTC m=+1041.078354197" watchObservedRunningTime="2026-02-19 13:03:50.693420229 +0000 UTC m=+1041.088939027" Feb 19 13:03:50 crc kubenswrapper[4833]: I0219 13:03:50.708788 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bc7876d45-mvc4l" podStartSLOduration=2.900538767 podStartE2EDuration="3.708769501s" podCreationTimestamp="2026-02-19 13:03:47 +0000 UTC" firstStartedPulling="2026-02-19 13:03:48.256251746 +0000 UTC m=+1038.651770524" lastFinishedPulling="2026-02-19 13:03:49.06448249 +0000 UTC m=+1039.460001258" observedRunningTime="2026-02-19 13:03:50.702790038 +0000 UTC m=+1041.098308806" watchObservedRunningTime="2026-02-19 13:03:50.708769501 +0000 UTC m=+1041.104288279" Feb 19 13:03:50 crc kubenswrapper[4833]: I0219 13:03:50.715314 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 19 13:03:50 crc kubenswrapper[4833]: I0219 13:03:50.975617 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-mvc4l"] Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.008327 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-p9qjz"] Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.009544 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-p9qjz" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.031848 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-p9qjz"] Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.063419 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.081087 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.082428 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.088348 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.088389 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.088675 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-jjm97" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.088843 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.102826 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.153677 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e175eae-03fe-4c4b-b5d2-96df10844449-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1e175eae-03fe-4c4b-b5d2-96df10844449\") " pod="openstack/ovn-northd-0" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.153731 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e175eae-03fe-4c4b-b5d2-96df10844449-config\") pod \"ovn-northd-0\" (UID: \"1e175eae-03fe-4c4b-b5d2-96df10844449\") " pod="openstack/ovn-northd-0" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.153754 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e175eae-03fe-4c4b-b5d2-96df10844449-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1e175eae-03fe-4c4b-b5d2-96df10844449\") " pod="openstack/ovn-northd-0" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.153773 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e56bcf5-cac7-4e98-a3b0-43430ecf891e-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-p9qjz\" (UID: \"5e56bcf5-cac7-4e98-a3b0-43430ecf891e\") " pod="openstack/dnsmasq-dns-b8fbc5445-p9qjz" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.153802 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e56bcf5-cac7-4e98-a3b0-43430ecf891e-config\") pod \"dnsmasq-dns-b8fbc5445-p9qjz\" (UID: \"5e56bcf5-cac7-4e98-a3b0-43430ecf891e\") " pod="openstack/dnsmasq-dns-b8fbc5445-p9qjz" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.153816 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e56bcf5-cac7-4e98-a3b0-43430ecf891e-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-p9qjz\" (UID: \"5e56bcf5-cac7-4e98-a3b0-43430ecf891e\") " pod="openstack/dnsmasq-dns-b8fbc5445-p9qjz" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.153831 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1e175eae-03fe-4c4b-b5d2-96df10844449-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1e175eae-03fe-4c4b-b5d2-96df10844449\") " 
pod="openstack/ovn-northd-0" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.153858 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e56bcf5-cac7-4e98-a3b0-43430ecf891e-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-p9qjz\" (UID: \"5e56bcf5-cac7-4e98-a3b0-43430ecf891e\") " pod="openstack/dnsmasq-dns-b8fbc5445-p9qjz" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.153896 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t95bd\" (UniqueName: \"kubernetes.io/projected/5e56bcf5-cac7-4e98-a3b0-43430ecf891e-kube-api-access-t95bd\") pod \"dnsmasq-dns-b8fbc5445-p9qjz\" (UID: \"5e56bcf5-cac7-4e98-a3b0-43430ecf891e\") " pod="openstack/dnsmasq-dns-b8fbc5445-p9qjz" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.153921 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e175eae-03fe-4c4b-b5d2-96df10844449-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1e175eae-03fe-4c4b-b5d2-96df10844449\") " pod="openstack/ovn-northd-0" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.153950 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e175eae-03fe-4c4b-b5d2-96df10844449-scripts\") pod \"ovn-northd-0\" (UID: \"1e175eae-03fe-4c4b-b5d2-96df10844449\") " pod="openstack/ovn-northd-0" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.153978 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxkkv\" (UniqueName: \"kubernetes.io/projected/1e175eae-03fe-4c4b-b5d2-96df10844449-kube-api-access-cxkkv\") pod \"ovn-northd-0\" (UID: \"1e175eae-03fe-4c4b-b5d2-96df10844449\") " pod="openstack/ovn-northd-0" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.257562 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e175eae-03fe-4c4b-b5d2-96df10844449-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1e175eae-03fe-4c4b-b5d2-96df10844449\") " pod="openstack/ovn-northd-0" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.257631 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e175eae-03fe-4c4b-b5d2-96df10844449-config\") pod \"ovn-northd-0\" (UID: \"1e175eae-03fe-4c4b-b5d2-96df10844449\") " pod="openstack/ovn-northd-0" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.257667 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e175eae-03fe-4c4b-b5d2-96df10844449-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1e175eae-03fe-4c4b-b5d2-96df10844449\") " pod="openstack/ovn-northd-0" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.257695 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e56bcf5-cac7-4e98-a3b0-43430ecf891e-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-p9qjz\" (UID: \"5e56bcf5-cac7-4e98-a3b0-43430ecf891e\") " pod="openstack/dnsmasq-dns-b8fbc5445-p9qjz" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.257728 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e56bcf5-cac7-4e98-a3b0-43430ecf891e-config\") pod \"dnsmasq-dns-b8fbc5445-p9qjz\" (UID: \"5e56bcf5-cac7-4e98-a3b0-43430ecf891e\") " pod="openstack/dnsmasq-dns-b8fbc5445-p9qjz" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.257750 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e56bcf5-cac7-4e98-a3b0-43430ecf891e-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-p9qjz\" (UID: \"5e56bcf5-cac7-4e98-a3b0-43430ecf891e\") " pod="openstack/dnsmasq-dns-b8fbc5445-p9qjz" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.257773 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1e175eae-03fe-4c4b-b5d2-96df10844449-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1e175eae-03fe-4c4b-b5d2-96df10844449\") " pod="openstack/ovn-northd-0" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.257797 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e56bcf5-cac7-4e98-a3b0-43430ecf891e-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-p9qjz\" (UID: \"5e56bcf5-cac7-4e98-a3b0-43430ecf891e\") " pod="openstack/dnsmasq-dns-b8fbc5445-p9qjz" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.257851 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t95bd\" (UniqueName: \"kubernetes.io/projected/5e56bcf5-cac7-4e98-a3b0-43430ecf891e-kube-api-access-t95bd\") pod \"dnsmasq-dns-b8fbc5445-p9qjz\" (UID: \"5e56bcf5-cac7-4e98-a3b0-43430ecf891e\") " pod="openstack/dnsmasq-dns-b8fbc5445-p9qjz" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.257886 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e175eae-03fe-4c4b-b5d2-96df10844449-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1e175eae-03fe-4c4b-b5d2-96df10844449\") " pod="openstack/ovn-northd-0" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.257928 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e175eae-03fe-4c4b-b5d2-96df10844449-scripts\") pod \"ovn-northd-0\" (UID: \"1e175eae-03fe-4c4b-b5d2-96df10844449\") " pod="openstack/ovn-northd-0" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.257968 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxkkv\" (UniqueName: \"kubernetes.io/projected/1e175eae-03fe-4c4b-b5d2-96df10844449-kube-api-access-cxkkv\") pod \"ovn-northd-0\" (UID: \"1e175eae-03fe-4c4b-b5d2-96df10844449\") " pod="openstack/ovn-northd-0" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.258767 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1e175eae-03fe-4c4b-b5d2-96df10844449-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1e175eae-03fe-4c4b-b5d2-96df10844449\") " pod="openstack/ovn-northd-0" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.259281 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e56bcf5-cac7-4e98-a3b0-43430ecf891e-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-p9qjz\" (UID: \"5e56bcf5-cac7-4e98-a3b0-43430ecf891e\") " 
pod="openstack/dnsmasq-dns-b8fbc5445-p9qjz" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.259308 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e56bcf5-cac7-4e98-a3b0-43430ecf891e-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-p9qjz\" (UID: \"5e56bcf5-cac7-4e98-a3b0-43430ecf891e\") " pod="openstack/dnsmasq-dns-b8fbc5445-p9qjz" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.259371 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e56bcf5-cac7-4e98-a3b0-43430ecf891e-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-p9qjz\" (UID: \"5e56bcf5-cac7-4e98-a3b0-43430ecf891e\") " pod="openstack/dnsmasq-dns-b8fbc5445-p9qjz" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.259520 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e56bcf5-cac7-4e98-a3b0-43430ecf891e-config\") pod \"dnsmasq-dns-b8fbc5445-p9qjz\" (UID: \"5e56bcf5-cac7-4e98-a3b0-43430ecf891e\") " pod="openstack/dnsmasq-dns-b8fbc5445-p9qjz" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.259559 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1e175eae-03fe-4c4b-b5d2-96df10844449-scripts\") pod \"ovn-northd-0\" (UID: \"1e175eae-03fe-4c4b-b5d2-96df10844449\") " pod="openstack/ovn-northd-0" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.259708 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e175eae-03fe-4c4b-b5d2-96df10844449-config\") pod \"ovn-northd-0\" (UID: \"1e175eae-03fe-4c4b-b5d2-96df10844449\") " pod="openstack/ovn-northd-0" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.263180 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e175eae-03fe-4c4b-b5d2-96df10844449-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1e175eae-03fe-4c4b-b5d2-96df10844449\") " pod="openstack/ovn-northd-0" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.263516 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e175eae-03fe-4c4b-b5d2-96df10844449-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1e175eae-03fe-4c4b-b5d2-96df10844449\") " pod="openstack/ovn-northd-0" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.274315 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e175eae-03fe-4c4b-b5d2-96df10844449-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1e175eae-03fe-4c4b-b5d2-96df10844449\") " pod="openstack/ovn-northd-0" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.280078 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t95bd\" (UniqueName: \"kubernetes.io/projected/5e56bcf5-cac7-4e98-a3b0-43430ecf891e-kube-api-access-t95bd\") pod \"dnsmasq-dns-b8fbc5445-p9qjz\" (UID: \"5e56bcf5-cac7-4e98-a3b0-43430ecf891e\") " pod="openstack/dnsmasq-dns-b8fbc5445-p9qjz" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.280456 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxkkv\" (UniqueName: 
\"kubernetes.io/projected/1e175eae-03fe-4c4b-b5d2-96df10844449-kube-api-access-cxkkv\") pod \"ovn-northd-0\" (UID: \"1e175eae-03fe-4c4b-b5d2-96df10844449\") " pod="openstack/ovn-northd-0" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.355176 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-p9qjz" Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.405072 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 19 13:03:51 crc kubenswrapper[4833]: E0219 13:03:51.430757 4833 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.222:36524->38.102.83.222:38639: read tcp 38.102.83.222:36524->38.102.83.222:38639: read: connection reset by peer Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.787750 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-p9qjz"] Feb 19 13:03:51 crc kubenswrapper[4833]: W0219 13:03:51.790947 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e56bcf5_cac7_4e98_a3b0_43430ecf891e.slice/crio-2354d65209b98a594726e38f3192fa08b5cc904bd06cea48af78af6f9a362b48 WatchSource:0}: Error finding container 2354d65209b98a594726e38f3192fa08b5cc904bd06cea48af78af6f9a362b48: Status 404 returned error can't find the container with id 2354d65209b98a594726e38f3192fa08b5cc904bd06cea48af78af6f9a362b48 Feb 19 13:03:51 crc kubenswrapper[4833]: I0219 13:03:51.890740 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 19 13:03:51 crc kubenswrapper[4833]: W0219 13:03:51.892834 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e175eae_03fe_4c4b_b5d2_96df10844449.slice/crio-cf8dca8f4108764c8c91a7a619c409df5ad8c0ed2ae5e7ecc20da7bbc5b8995e WatchSource:0}: Error finding container cf8dca8f4108764c8c91a7a619c409df5ad8c0ed2ae5e7ecc20da7bbc5b8995e: Status 404 returned error can't find the container with id cf8dca8f4108764c8c91a7a619c409df5ad8c0ed2ae5e7ecc20da7bbc5b8995e Feb 19 13:03:52 crc kubenswrapper[4833]: I0219 13:03:52.193048 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 19 13:03:52 crc kubenswrapper[4833]: I0219 13:03:52.201168 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 19 13:03:52 crc kubenswrapper[4833]: I0219 13:03:52.204309 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 19 13:03:52 crc kubenswrapper[4833]: I0219 13:03:52.205106 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 19 13:03:52 crc kubenswrapper[4833]: I0219 13:03:52.205215 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 19 13:03:52 crc kubenswrapper[4833]: I0219 13:03:52.205463 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-7xrsv" Feb 19 13:03:52 crc kubenswrapper[4833]: I0219 13:03:52.229530 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 19 13:03:52 crc kubenswrapper[4833]: I0219 13:03:52.275101 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx7lr\" (UniqueName: \"kubernetes.io/projected/0dfc7a49-4c64-4c4c-b0a9-eea1d8734612-kube-api-access-gx7lr\") pod \"swift-storage-0\" (UID: \"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612\") " pod="openstack/swift-storage-0" Feb 19 13:03:52 crc kubenswrapper[4833]: I0219 13:03:52.275205 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0dfc7a49-4c64-4c4c-b0a9-eea1d8734612-cache\") pod \"swift-storage-0\" (UID: \"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612\") " pod="openstack/swift-storage-0" Feb 19 13:03:52 crc kubenswrapper[4833]: I0219 13:03:52.275248 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0dfc7a49-4c64-4c4c-b0a9-eea1d8734612-lock\") pod \"swift-storage-0\" (UID: \"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612\") " pod="openstack/swift-storage-0" Feb 19 13:03:52 crc kubenswrapper[4833]: I0219 13:03:52.275284 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dfc7a49-4c64-4c4c-b0a9-eea1d8734612-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612\") " pod="openstack/swift-storage-0" Feb 19 13:03:52 crc kubenswrapper[4833]: I0219 13:03:52.275314 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612\") " pod="openstack/swift-storage-0" Feb 19 13:03:52 crc kubenswrapper[4833]: I0219 13:03:52.275373 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0dfc7a49-4c64-4c4c-b0a9-eea1d8734612-etc-swift\") pod \"swift-storage-0\" (UID: \"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612\") " pod="openstack/swift-storage-0" Feb 19 13:03:52 crc kubenswrapper[4833]: I0219 13:03:52.377575 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx7lr\" (UniqueName: \"kubernetes.io/projected/0dfc7a49-4c64-4c4c-b0a9-eea1d8734612-kube-api-access-gx7lr\") pod \"swift-storage-0\" (UID: \"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612\") " pod="openstack/swift-storage-0" Feb 19 13:03:52 crc kubenswrapper[4833]: I0219 13:03:52.377649 
4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0dfc7a49-4c64-4c4c-b0a9-eea1d8734612-cache\") pod \"swift-storage-0\" (UID: \"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612\") " pod="openstack/swift-storage-0" Feb 19 13:03:52 crc kubenswrapper[4833]: I0219 13:03:52.377690 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0dfc7a49-4c64-4c4c-b0a9-eea1d8734612-lock\") pod \"swift-storage-0\" (UID: \"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612\") " pod="openstack/swift-storage-0" Feb 19 13:03:52 crc kubenswrapper[4833]: I0219 13:03:52.377732 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dfc7a49-4c64-4c4c-b0a9-eea1d8734612-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612\") " pod="openstack/swift-storage-0" Feb 19 13:03:52 crc kubenswrapper[4833]: I0219 13:03:52.377767 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612\") " pod="openstack/swift-storage-0" Feb 19 13:03:52 crc kubenswrapper[4833]: I0219 13:03:52.377842 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0dfc7a49-4c64-4c4c-b0a9-eea1d8734612-etc-swift\") pod \"swift-storage-0\" (UID: \"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612\") " pod="openstack/swift-storage-0" Feb 19 13:03:52 crc kubenswrapper[4833]: E0219 13:03:52.378033 4833 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 13:03:52 crc kubenswrapper[4833]: E0219 13:03:52.378051 4833 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 13:03:52 crc kubenswrapper[4833]: E0219 13:03:52.378104 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0dfc7a49-4c64-4c4c-b0a9-eea1d8734612-etc-swift podName:0dfc7a49-4c64-4c4c-b0a9-eea1d8734612 nodeName:}" failed. No retries permitted until 2026-02-19 13:03:52.8780834 +0000 UTC m=+1043.273602178 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0dfc7a49-4c64-4c4c-b0a9-eea1d8734612-etc-swift") pod "swift-storage-0" (UID: "0dfc7a49-4c64-4c4c-b0a9-eea1d8734612") : configmap "swift-ring-files" not found Feb 19 13:03:52 crc kubenswrapper[4833]: I0219 13:03:52.379413 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/0dfc7a49-4c64-4c4c-b0a9-eea1d8734612-cache\") pod \"swift-storage-0\" (UID: \"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612\") " pod="openstack/swift-storage-0" Feb 19 13:03:52 crc kubenswrapper[4833]: I0219 13:03:52.379761 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/0dfc7a49-4c64-4c4c-b0a9-eea1d8734612-lock\") pod \"swift-storage-0\" (UID: \"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612\") " pod="openstack/swift-storage-0" Feb 19 13:03:52 crc kubenswrapper[4833]: I0219 13:03:52.380870 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/swift-storage-0" Feb 19 13:03:52 crc kubenswrapper[4833]: I0219 13:03:52.396873 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dfc7a49-4c64-4c4c-b0a9-eea1d8734612-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612\") " pod="openstack/swift-storage-0" Feb 19 13:03:52 crc kubenswrapper[4833]: I0219 13:03:52.399521 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx7lr\" (UniqueName: \"kubernetes.io/projected/0dfc7a49-4c64-4c4c-b0a9-eea1d8734612-kube-api-access-gx7lr\") pod \"swift-storage-0\" (UID: \"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612\") " pod="openstack/swift-storage-0" Feb 19 13:03:52 crc kubenswrapper[4833]: I0219 13:03:52.441617 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612\") " pod="openstack/swift-storage-0" Feb 19 13:03:52 crc kubenswrapper[4833]: I0219 13:03:52.679998 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-p9qjz" event={"ID":"5e56bcf5-cac7-4e98-a3b0-43430ecf891e","Type":"ContainerStarted","Data":"2354d65209b98a594726e38f3192fa08b5cc904bd06cea48af78af6f9a362b48"} Feb 19 13:03:52 crc kubenswrapper[4833]: I0219 13:03:52.685707 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1e175eae-03fe-4c4b-b5d2-96df10844449","Type":"ContainerStarted","Data":"cf8dca8f4108764c8c91a7a619c409df5ad8c0ed2ae5e7ecc20da7bbc5b8995e"} Feb 19 13:03:52 crc kubenswrapper[4833]: I0219 13:03:52.685870 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bc7876d45-mvc4l" podUID="91fd3e8f-0f4e-4fa4-939f-e09f4f75c408" containerName="dnsmasq-dns" containerID="cri-o://e589a42651d566d59f127f2d2da1097f03189d353e97434ee440973d05d307f4" gracePeriod=10 Feb 19 13:03:52 crc kubenswrapper[4833]: I0219 13:03:52.765923 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 19 13:03:52 crc kubenswrapper[4833]: 
I0219 13:03:52.849876 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 19 13:03:52 crc kubenswrapper[4833]: I0219 13:03:52.884692 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0dfc7a49-4c64-4c4c-b0a9-eea1d8734612-etc-swift\") pod \"swift-storage-0\" (UID: \"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612\") " pod="openstack/swift-storage-0" Feb 19 13:03:52 crc kubenswrapper[4833]: E0219 13:03:52.884982 4833 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 13:03:52 crc kubenswrapper[4833]: E0219 13:03:52.885028 4833 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 13:03:52 crc kubenswrapper[4833]: E0219 13:03:52.885094 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0dfc7a49-4c64-4c4c-b0a9-eea1d8734612-etc-swift podName:0dfc7a49-4c64-4c4c-b0a9-eea1d8734612 nodeName:}" failed. No retries permitted until 2026-02-19 13:03:53.885069964 +0000 UTC m=+1044.280588742 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0dfc7a49-4c64-4c4c-b0a9-eea1d8734612-etc-swift") pod "swift-storage-0" (UID: "0dfc7a49-4c64-4c4c-b0a9-eea1d8734612") : configmap "swift-ring-files" not found Feb 19 13:03:53 crc kubenswrapper[4833]: I0219 13:03:53.696725 4833 generic.go:334] "Generic (PLEG): container finished" podID="91fd3e8f-0f4e-4fa4-939f-e09f4f75c408" containerID="e589a42651d566d59f127f2d2da1097f03189d353e97434ee440973d05d307f4" exitCode=0 Feb 19 13:03:53 crc kubenswrapper[4833]: I0219 13:03:53.696856 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-mvc4l" event={"ID":"91fd3e8f-0f4e-4fa4-939f-e09f4f75c408","Type":"ContainerDied","Data":"e589a42651d566d59f127f2d2da1097f03189d353e97434ee440973d05d307f4"} Feb 19 13:03:53 crc kubenswrapper[4833]: I0219 13:03:53.900087 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0dfc7a49-4c64-4c4c-b0a9-eea1d8734612-etc-swift\") pod \"swift-storage-0\" (UID: \"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612\") " pod="openstack/swift-storage-0" Feb 19 13:03:53 crc kubenswrapper[4833]: E0219 13:03:53.900255 4833 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 13:03:53 crc kubenswrapper[4833]: E0219 13:03:53.900272 4833 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 13:03:53 crc kubenswrapper[4833]: E0219 13:03:53.900327 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0dfc7a49-4c64-4c4c-b0a9-eea1d8734612-etc-swift podName:0dfc7a49-4c64-4c4c-b0a9-eea1d8734612 nodeName:}" failed. No retries permitted until 2026-02-19 13:03:55.900307924 +0000 UTC m=+1046.295826692 (durationBeforeRetry 2s). 
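
The etc-swift mount above keeps failing because it is a projected volume that sources the swift-ring-files ConfigMap, and that ConfigMap does not exist yet in the openstack namespace. A minimal sketch of watching for the same object from outside the node, assuming k8s.io/client-go is available; the kubeconfig path is purely illustrative, the namespace and ConfigMap name are taken from the log:

    package main

    import (
        "context"
        "fmt"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location; adjust for the actual environment.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            // The same lookup kubelet's projected-volume plugin is failing above.
            _, err := cs.CoreV1().ConfigMaps("openstack").Get(
                context.TODO(), "swift-ring-files", metav1.GetOptions{})
            if err == nil {
                fmt.Println("swift-ring-files exists; etc-swift can mount on the next retry")
                return
            }
            if !apierrors.IsNotFound(err) {
                panic(err)
            }
            fmt.Println("still missing, as in the MountVolume.SetUp errors above")
            time.Sleep(2 * time.Second)
        }
    }

Once the object appears (the swift-ring-rebalance job created below is what eventually publishes it), the next kubelet retry of MountVolume.SetUp can succeed.
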
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0dfc7a49-4c64-4c4c-b0a9-eea1d8734612-etc-swift") pod "swift-storage-0" (UID: "0dfc7a49-4c64-4c4c-b0a9-eea1d8734612") : configmap "swift-ring-files" not found Feb 19 13:03:55 crc kubenswrapper[4833]: I0219 13:03:55.933893 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0dfc7a49-4c64-4c4c-b0a9-eea1d8734612-etc-swift\") pod \"swift-storage-0\" (UID: \"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612\") " pod="openstack/swift-storage-0" Feb 19 13:03:55 crc kubenswrapper[4833]: E0219 13:03:55.934129 4833 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 13:03:55 crc kubenswrapper[4833]: E0219 13:03:55.934533 4833 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 13:03:55 crc kubenswrapper[4833]: E0219 13:03:55.934580 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0dfc7a49-4c64-4c4c-b0a9-eea1d8734612-etc-swift podName:0dfc7a49-4c64-4c4c-b0a9-eea1d8734612 nodeName:}" failed. No retries permitted until 2026-02-19 13:03:59.934566131 +0000 UTC m=+1050.330084899 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0dfc7a49-4c64-4c4c-b0a9-eea1d8734612-etc-swift") pod "swift-storage-0" (UID: "0dfc7a49-4c64-4c4c-b0a9-eea1d8734612") : configmap "swift-ring-files" not found Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.116273 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-q8vmz"] Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.117291 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-q8vmz" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.122143 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.122143 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.123049 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.137188 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw6vr\" (UniqueName: \"kubernetes.io/projected/b7c5f9c7-2c80-4685-912c-d3660f7306ef-kube-api-access-zw6vr\") pod \"swift-ring-rebalance-q8vmz\" (UID: \"b7c5f9c7-2c80-4685-912c-d3660f7306ef\") " pod="openstack/swift-ring-rebalance-q8vmz" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.137259 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b7c5f9c7-2c80-4685-912c-d3660f7306ef-swiftconf\") pod \"swift-ring-rebalance-q8vmz\" (UID: \"b7c5f9c7-2c80-4685-912c-d3660f7306ef\") " pod="openstack/swift-ring-rebalance-q8vmz" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.137308 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b7c5f9c7-2c80-4685-912c-d3660f7306ef-scripts\") pod \"swift-ring-rebalance-q8vmz\" (UID: \"b7c5f9c7-2c80-4685-912c-d3660f7306ef\") " pod="openstack/swift-ring-rebalance-q8vmz" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.137333 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b7c5f9c7-2c80-4685-912c-d3660f7306ef-dispersionconf\") pod \"swift-ring-rebalance-q8vmz\" (UID: \"b7c5f9c7-2c80-4685-912c-d3660f7306ef\") " pod="openstack/swift-ring-rebalance-q8vmz" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.137383 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b7c5f9c7-2c80-4685-912c-d3660f7306ef-etc-swift\") pod \"swift-ring-rebalance-q8vmz\" (UID: \"b7c5f9c7-2c80-4685-912c-d3660f7306ef\") " pod="openstack/swift-ring-rebalance-q8vmz" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.137413 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c5f9c7-2c80-4685-912c-d3660f7306ef-combined-ca-bundle\") pod \"swift-ring-rebalance-q8vmz\" (UID: \"b7c5f9c7-2c80-4685-912c-d3660f7306ef\") " pod="openstack/swift-ring-rebalance-q8vmz" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.137467 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b7c5f9c7-2c80-4685-912c-d3660f7306ef-ring-data-devices\") pod \"swift-ring-rebalance-q8vmz\" (UID: \"b7c5f9c7-2c80-4685-912c-d3660f7306ef\") " pod="openstack/swift-ring-rebalance-q8vmz" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.148290 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-q8vmz"] Feb 19 
13:03:56 crc kubenswrapper[4833]: E0219 13:03:56.149189 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-zw6vr ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-q8vmz" podUID="b7c5f9c7-2c80-4685-912c-d3660f7306ef" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.166955 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-p2m8p"] Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.168276 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-p2m8p" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.176455 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-p2m8p"] Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.186802 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-q8vmz"] Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.238937 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b7c5f9c7-2c80-4685-912c-d3660f7306ef-scripts\") pod \"swift-ring-rebalance-q8vmz\" (UID: \"b7c5f9c7-2c80-4685-912c-d3660f7306ef\") " pod="openstack/swift-ring-rebalance-q8vmz" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.238990 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b7c5f9c7-2c80-4685-912c-d3660f7306ef-dispersionconf\") pod \"swift-ring-rebalance-q8vmz\" (UID: \"b7c5f9c7-2c80-4685-912c-d3660f7306ef\") " pod="openstack/swift-ring-rebalance-q8vmz" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.239021 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b7c5f9c7-2c80-4685-912c-d3660f7306ef-etc-swift\") pod \"swift-ring-rebalance-q8vmz\" (UID: \"b7c5f9c7-2c80-4685-912c-d3660f7306ef\") " pod="openstack/swift-ring-rebalance-q8vmz" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.239054 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c5f9c7-2c80-4685-912c-d3660f7306ef-combined-ca-bundle\") pod \"swift-ring-rebalance-q8vmz\" (UID: \"b7c5f9c7-2c80-4685-912c-d3660f7306ef\") " pod="openstack/swift-ring-rebalance-q8vmz" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.239076 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46126eda-f691-4339-966c-615190176dea-scripts\") pod \"swift-ring-rebalance-p2m8p\" (UID: \"46126eda-f691-4339-966c-615190176dea\") " pod="openstack/swift-ring-rebalance-p2m8p" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.239097 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/46126eda-f691-4339-966c-615190176dea-ring-data-devices\") pod \"swift-ring-rebalance-p2m8p\" (UID: \"46126eda-f691-4339-966c-615190176dea\") " pod="openstack/swift-ring-rebalance-p2m8p" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.239138 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" 
(UniqueName: \"kubernetes.io/configmap/b7c5f9c7-2c80-4685-912c-d3660f7306ef-ring-data-devices\") pod \"swift-ring-rebalance-q8vmz\" (UID: \"b7c5f9c7-2c80-4685-912c-d3660f7306ef\") " pod="openstack/swift-ring-rebalance-q8vmz" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.239176 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/46126eda-f691-4339-966c-615190176dea-swiftconf\") pod \"swift-ring-rebalance-p2m8p\" (UID: \"46126eda-f691-4339-966c-615190176dea\") " pod="openstack/swift-ring-rebalance-p2m8p" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.239265 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46126eda-f691-4339-966c-615190176dea-combined-ca-bundle\") pod \"swift-ring-rebalance-p2m8p\" (UID: \"46126eda-f691-4339-966c-615190176dea\") " pod="openstack/swift-ring-rebalance-p2m8p" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.239327 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clvwz\" (UniqueName: \"kubernetes.io/projected/46126eda-f691-4339-966c-615190176dea-kube-api-access-clvwz\") pod \"swift-ring-rebalance-p2m8p\" (UID: \"46126eda-f691-4339-966c-615190176dea\") " pod="openstack/swift-ring-rebalance-p2m8p" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.239421 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/46126eda-f691-4339-966c-615190176dea-etc-swift\") pod \"swift-ring-rebalance-p2m8p\" (UID: \"46126eda-f691-4339-966c-615190176dea\") " pod="openstack/swift-ring-rebalance-p2m8p" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.239545 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/46126eda-f691-4339-966c-615190176dea-dispersionconf\") pod \"swift-ring-rebalance-p2m8p\" (UID: \"46126eda-f691-4339-966c-615190176dea\") " pod="openstack/swift-ring-rebalance-p2m8p" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.239596 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw6vr\" (UniqueName: \"kubernetes.io/projected/b7c5f9c7-2c80-4685-912c-d3660f7306ef-kube-api-access-zw6vr\") pod \"swift-ring-rebalance-q8vmz\" (UID: \"b7c5f9c7-2c80-4685-912c-d3660f7306ef\") " pod="openstack/swift-ring-rebalance-q8vmz" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.239670 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b7c5f9c7-2c80-4685-912c-d3660f7306ef-swiftconf\") pod \"swift-ring-rebalance-q8vmz\" (UID: \"b7c5f9c7-2c80-4685-912c-d3660f7306ef\") " pod="openstack/swift-ring-rebalance-q8vmz" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.240018 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b7c5f9c7-2c80-4685-912c-d3660f7306ef-etc-swift\") pod \"swift-ring-rebalance-q8vmz\" (UID: \"b7c5f9c7-2c80-4685-912c-d3660f7306ef\") " pod="openstack/swift-ring-rebalance-q8vmz" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.240240 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/b7c5f9c7-2c80-4685-912c-d3660f7306ef-scripts\") pod \"swift-ring-rebalance-q8vmz\" (UID: \"b7c5f9c7-2c80-4685-912c-d3660f7306ef\") " pod="openstack/swift-ring-rebalance-q8vmz" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.240830 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b7c5f9c7-2c80-4685-912c-d3660f7306ef-ring-data-devices\") pod \"swift-ring-rebalance-q8vmz\" (UID: \"b7c5f9c7-2c80-4685-912c-d3660f7306ef\") " pod="openstack/swift-ring-rebalance-q8vmz" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.246674 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b7c5f9c7-2c80-4685-912c-d3660f7306ef-swiftconf\") pod \"swift-ring-rebalance-q8vmz\" (UID: \"b7c5f9c7-2c80-4685-912c-d3660f7306ef\") " pod="openstack/swift-ring-rebalance-q8vmz" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.251761 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c5f9c7-2c80-4685-912c-d3660f7306ef-combined-ca-bundle\") pod \"swift-ring-rebalance-q8vmz\" (UID: \"b7c5f9c7-2c80-4685-912c-d3660f7306ef\") " pod="openstack/swift-ring-rebalance-q8vmz" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.258714 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b7c5f9c7-2c80-4685-912c-d3660f7306ef-dispersionconf\") pod \"swift-ring-rebalance-q8vmz\" (UID: \"b7c5f9c7-2c80-4685-912c-d3660f7306ef\") " pod="openstack/swift-ring-rebalance-q8vmz" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.259873 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw6vr\" (UniqueName: \"kubernetes.io/projected/b7c5f9c7-2c80-4685-912c-d3660f7306ef-kube-api-access-zw6vr\") pod \"swift-ring-rebalance-q8vmz\" (UID: \"b7c5f9c7-2c80-4685-912c-d3660f7306ef\") " pod="openstack/swift-ring-rebalance-q8vmz" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.341256 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46126eda-f691-4339-966c-615190176dea-scripts\") pod \"swift-ring-rebalance-p2m8p\" (UID: \"46126eda-f691-4339-966c-615190176dea\") " pod="openstack/swift-ring-rebalance-p2m8p" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.341686 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/46126eda-f691-4339-966c-615190176dea-ring-data-devices\") pod \"swift-ring-rebalance-p2m8p\" (UID: \"46126eda-f691-4339-966c-615190176dea\") " pod="openstack/swift-ring-rebalance-p2m8p" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.341753 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/46126eda-f691-4339-966c-615190176dea-swiftconf\") pod \"swift-ring-rebalance-p2m8p\" (UID: \"46126eda-f691-4339-966c-615190176dea\") " pod="openstack/swift-ring-rebalance-p2m8p" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.341799 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46126eda-f691-4339-966c-615190176dea-combined-ca-bundle\") pod \"swift-ring-rebalance-p2m8p\" (UID: 
\"46126eda-f691-4339-966c-615190176dea\") " pod="openstack/swift-ring-rebalance-p2m8p" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.341823 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clvwz\" (UniqueName: \"kubernetes.io/projected/46126eda-f691-4339-966c-615190176dea-kube-api-access-clvwz\") pod \"swift-ring-rebalance-p2m8p\" (UID: \"46126eda-f691-4339-966c-615190176dea\") " pod="openstack/swift-ring-rebalance-p2m8p" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.341860 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/46126eda-f691-4339-966c-615190176dea-etc-swift\") pod \"swift-ring-rebalance-p2m8p\" (UID: \"46126eda-f691-4339-966c-615190176dea\") " pod="openstack/swift-ring-rebalance-p2m8p" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.341898 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/46126eda-f691-4339-966c-615190176dea-dispersionconf\") pod \"swift-ring-rebalance-p2m8p\" (UID: \"46126eda-f691-4339-966c-615190176dea\") " pod="openstack/swift-ring-rebalance-p2m8p" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.342237 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46126eda-f691-4339-966c-615190176dea-scripts\") pod \"swift-ring-rebalance-p2m8p\" (UID: \"46126eda-f691-4339-966c-615190176dea\") " pod="openstack/swift-ring-rebalance-p2m8p" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.342442 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/46126eda-f691-4339-966c-615190176dea-etc-swift\") pod \"swift-ring-rebalance-p2m8p\" (UID: \"46126eda-f691-4339-966c-615190176dea\") " pod="openstack/swift-ring-rebalance-p2m8p" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.342779 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/46126eda-f691-4339-966c-615190176dea-ring-data-devices\") pod \"swift-ring-rebalance-p2m8p\" (UID: \"46126eda-f691-4339-966c-615190176dea\") " pod="openstack/swift-ring-rebalance-p2m8p" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.345101 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/46126eda-f691-4339-966c-615190176dea-swiftconf\") pod \"swift-ring-rebalance-p2m8p\" (UID: \"46126eda-f691-4339-966c-615190176dea\") " pod="openstack/swift-ring-rebalance-p2m8p" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.345391 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/46126eda-f691-4339-966c-615190176dea-dispersionconf\") pod \"swift-ring-rebalance-p2m8p\" (UID: \"46126eda-f691-4339-966c-615190176dea\") " pod="openstack/swift-ring-rebalance-p2m8p" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.345848 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46126eda-f691-4339-966c-615190176dea-combined-ca-bundle\") pod \"swift-ring-rebalance-p2m8p\" (UID: \"46126eda-f691-4339-966c-615190176dea\") " pod="openstack/swift-ring-rebalance-p2m8p" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.360540 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clvwz\" (UniqueName: \"kubernetes.io/projected/46126eda-f691-4339-966c-615190176dea-kube-api-access-clvwz\") pod \"swift-ring-rebalance-p2m8p\" (UID: \"46126eda-f691-4339-966c-615190176dea\") " pod="openstack/swift-ring-rebalance-p2m8p" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.496888 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-p2m8p" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.718091 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-q8vmz" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.734935 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-q8vmz" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.748893 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zw6vr\" (UniqueName: \"kubernetes.io/projected/b7c5f9c7-2c80-4685-912c-d3660f7306ef-kube-api-access-zw6vr\") pod \"b7c5f9c7-2c80-4685-912c-d3660f7306ef\" (UID: \"b7c5f9c7-2c80-4685-912c-d3660f7306ef\") " Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.748963 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b7c5f9c7-2c80-4685-912c-d3660f7306ef-scripts\") pod \"b7c5f9c7-2c80-4685-912c-d3660f7306ef\" (UID: \"b7c5f9c7-2c80-4685-912c-d3660f7306ef\") " Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.748998 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b7c5f9c7-2c80-4685-912c-d3660f7306ef-swiftconf\") pod \"b7c5f9c7-2c80-4685-912c-d3660f7306ef\" (UID: \"b7c5f9c7-2c80-4685-912c-d3660f7306ef\") " Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.749042 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b7c5f9c7-2c80-4685-912c-d3660f7306ef-etc-swift\") pod \"b7c5f9c7-2c80-4685-912c-d3660f7306ef\" (UID: \"b7c5f9c7-2c80-4685-912c-d3660f7306ef\") " Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.749597 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b7c5f9c7-2c80-4685-912c-d3660f7306ef-dispersionconf\") pod \"b7c5f9c7-2c80-4685-912c-d3660f7306ef\" (UID: \"b7c5f9c7-2c80-4685-912c-d3660f7306ef\") " Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.749670 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7c5f9c7-2c80-4685-912c-d3660f7306ef-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b7c5f9c7-2c80-4685-912c-d3660f7306ef" (UID: "b7c5f9c7-2c80-4685-912c-d3660f7306ef"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.749713 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c5f9c7-2c80-4685-912c-d3660f7306ef-combined-ca-bundle\") pod \"b7c5f9c7-2c80-4685-912c-d3660f7306ef\" (UID: \"b7c5f9c7-2c80-4685-912c-d3660f7306ef\") " Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.749774 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b7c5f9c7-2c80-4685-912c-d3660f7306ef-ring-data-devices\") pod \"b7c5f9c7-2c80-4685-912c-d3660f7306ef\" (UID: \"b7c5f9c7-2c80-4685-912c-d3660f7306ef\") " Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.750277 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7c5f9c7-2c80-4685-912c-d3660f7306ef-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "b7c5f9c7-2c80-4685-912c-d3660f7306ef" (UID: "b7c5f9c7-2c80-4685-912c-d3660f7306ef"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.750716 4833 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b7c5f9c7-2c80-4685-912c-d3660f7306ef-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.750776 4833 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b7c5f9c7-2c80-4685-912c-d3660f7306ef-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.750922 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7c5f9c7-2c80-4685-912c-d3660f7306ef-scripts" (OuterVolumeSpecName: "scripts") pod "b7c5f9c7-2c80-4685-912c-d3660f7306ef" (UID: "b7c5f9c7-2c80-4685-912c-d3660f7306ef"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.754830 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7c5f9c7-2c80-4685-912c-d3660f7306ef-kube-api-access-zw6vr" (OuterVolumeSpecName: "kube-api-access-zw6vr") pod "b7c5f9c7-2c80-4685-912c-d3660f7306ef" (UID: "b7c5f9c7-2c80-4685-912c-d3660f7306ef"). InnerVolumeSpecName "kube-api-access-zw6vr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.754924 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c5f9c7-2c80-4685-912c-d3660f7306ef-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "b7c5f9c7-2c80-4685-912c-d3660f7306ef" (UID: "b7c5f9c7-2c80-4685-912c-d3660f7306ef"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.755441 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c5f9c7-2c80-4685-912c-d3660f7306ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b7c5f9c7-2c80-4685-912c-d3660f7306ef" (UID: "b7c5f9c7-2c80-4685-912c-d3660f7306ef"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.769476 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7c5f9c7-2c80-4685-912c-d3660f7306ef-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "b7c5f9c7-2c80-4685-912c-d3660f7306ef" (UID: "b7c5f9c7-2c80-4685-912c-d3660f7306ef"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.851960 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zw6vr\" (UniqueName: \"kubernetes.io/projected/b7c5f9c7-2c80-4685-912c-d3660f7306ef-kube-api-access-zw6vr\") on node \"crc\" DevicePath \"\"" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.852387 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b7c5f9c7-2c80-4685-912c-d3660f7306ef-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.852407 4833 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b7c5f9c7-2c80-4685-912c-d3660f7306ef-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.852426 4833 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b7c5f9c7-2c80-4685-912c-d3660f7306ef-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.852444 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7c5f9c7-2c80-4685-912c-d3660f7306ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:03:56 crc kubenswrapper[4833]: I0219 13:03:56.992385 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-p2m8p"] Feb 19 13:03:56 crc kubenswrapper[4833]: W0219 13:03:56.998209 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46126eda_f691_4339_966c_615190176dea.slice/crio-9b4b57f99fd475671ebee262a01baa244b5ca47568e8fc63d699195238e87b19 WatchSource:0}: Error finding container 9b4b57f99fd475671ebee262a01baa244b5ca47568e8fc63d699195238e87b19: Status 404 returned error can't find the container with id 9b4b57f99fd475671ebee262a01baa244b5ca47568e8fc63d699195238e87b19 Feb 19 13:03:57 crc kubenswrapper[4833]: I0219 13:03:57.316106 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-sjjkg"] Feb 19 13:03:57 crc kubenswrapper[4833]: I0219 13:03:57.317100 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-sjjkg" Feb 19 13:03:57 crc kubenswrapper[4833]: I0219 13:03:57.319211 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 19 13:03:57 crc kubenswrapper[4833]: I0219 13:03:57.331795 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-sjjkg"] Feb 19 13:03:57 crc kubenswrapper[4833]: I0219 13:03:57.360642 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8611c57b-ad50-4bdf-a87b-b5d2dff20ac8-operator-scripts\") pod \"root-account-create-update-sjjkg\" (UID: \"8611c57b-ad50-4bdf-a87b-b5d2dff20ac8\") " pod="openstack/root-account-create-update-sjjkg" Feb 19 13:03:57 crc kubenswrapper[4833]: I0219 13:03:57.360872 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lltbh\" (UniqueName: \"kubernetes.io/projected/8611c57b-ad50-4bdf-a87b-b5d2dff20ac8-kube-api-access-lltbh\") pod \"root-account-create-update-sjjkg\" (UID: \"8611c57b-ad50-4bdf-a87b-b5d2dff20ac8\") " pod="openstack/root-account-create-update-sjjkg" Feb 19 13:03:57 crc kubenswrapper[4833]: I0219 13:03:57.461986 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lltbh\" (UniqueName: \"kubernetes.io/projected/8611c57b-ad50-4bdf-a87b-b5d2dff20ac8-kube-api-access-lltbh\") pod \"root-account-create-update-sjjkg\" (UID: \"8611c57b-ad50-4bdf-a87b-b5d2dff20ac8\") " pod="openstack/root-account-create-update-sjjkg" Feb 19 13:03:57 crc kubenswrapper[4833]: I0219 13:03:57.462041 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8611c57b-ad50-4bdf-a87b-b5d2dff20ac8-operator-scripts\") pod \"root-account-create-update-sjjkg\" (UID: \"8611c57b-ad50-4bdf-a87b-b5d2dff20ac8\") " pod="openstack/root-account-create-update-sjjkg" Feb 19 13:03:57 crc kubenswrapper[4833]: I0219 13:03:57.462821 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8611c57b-ad50-4bdf-a87b-b5d2dff20ac8-operator-scripts\") pod \"root-account-create-update-sjjkg\" (UID: \"8611c57b-ad50-4bdf-a87b-b5d2dff20ac8\") " pod="openstack/root-account-create-update-sjjkg" Feb 19 13:03:57 crc kubenswrapper[4833]: I0219 13:03:57.481612 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lltbh\" (UniqueName: \"kubernetes.io/projected/8611c57b-ad50-4bdf-a87b-b5d2dff20ac8-kube-api-access-lltbh\") pod \"root-account-create-update-sjjkg\" (UID: \"8611c57b-ad50-4bdf-a87b-b5d2dff20ac8\") " pod="openstack/root-account-create-update-sjjkg" Feb 19 13:03:57 crc kubenswrapper[4833]: I0219 13:03:57.498363 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 19 13:03:57 crc kubenswrapper[4833]: I0219 13:03:57.498491 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 19 13:03:57 crc kubenswrapper[4833]: I0219 13:03:57.637860 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-sjjkg" Feb 19 13:03:57 crc kubenswrapper[4833]: I0219 13:03:57.729689 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-p2m8p" event={"ID":"46126eda-f691-4339-966c-615190176dea","Type":"ContainerStarted","Data":"9b4b57f99fd475671ebee262a01baa244b5ca47568e8fc63d699195238e87b19"} Feb 19 13:03:57 crc kubenswrapper[4833]: I0219 13:03:57.729737 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-q8vmz" Feb 19 13:03:57 crc kubenswrapper[4833]: I0219 13:03:57.797604 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6bc7876d45-mvc4l" podUID="91fd3e8f-0f4e-4fa4-939f-e09f4f75c408" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.110:5353: connect: connection refused" Feb 19 13:03:57 crc kubenswrapper[4833]: I0219 13:03:57.835060 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-q8vmz"] Feb 19 13:03:57 crc kubenswrapper[4833]: I0219 13:03:57.841120 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-q8vmz"] Feb 19 13:03:57 crc kubenswrapper[4833]: I0219 13:03:57.933004 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-fcs9l" Feb 19 13:03:58 crc kubenswrapper[4833]: I0219 13:03:58.094400 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-sjjkg"] Feb 19 13:03:58 crc kubenswrapper[4833]: I0219 13:03:58.323291 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7c5f9c7-2c80-4685-912c-d3660f7306ef" path="/var/lib/kubelet/pods/b7c5f9c7-2c80-4685-912c-d3660f7306ef/volumes" Feb 19 13:03:58 crc kubenswrapper[4833]: I0219 13:03:58.749792 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sjjkg" event={"ID":"8611c57b-ad50-4bdf-a87b-b5d2dff20ac8","Type":"ContainerStarted","Data":"48ea46de45d5fde06ef60cfef1d20cba996cc3c5fae4664033388b67cc56baa9"} Feb 19 13:04:00 crc kubenswrapper[4833]: I0219 13:04:00.014820 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0dfc7a49-4c64-4c4c-b0a9-eea1d8734612-etc-swift\") pod \"swift-storage-0\" (UID: \"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612\") " pod="openstack/swift-storage-0" Feb 19 13:04:00 crc kubenswrapper[4833]: E0219 13:04:00.015083 4833 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 13:04:00 crc kubenswrapper[4833]: E0219 13:04:00.015128 4833 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 13:04:00 crc kubenswrapper[4833]: E0219 13:04:00.015221 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0dfc7a49-4c64-4c4c-b0a9-eea1d8734612-etc-swift podName:0dfc7a49-4c64-4c4c-b0a9-eea1d8734612 nodeName:}" failed. No retries permitted until 2026-02-19 13:04:08.015192553 +0000 UTC m=+1058.410711331 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0dfc7a49-4c64-4c4c-b0a9-eea1d8734612-etc-swift") pod "swift-storage-0" (UID: "0dfc7a49-4c64-4c4c-b0a9-eea1d8734612") : configmap "swift-ring-files" not found Feb 19 13:04:00 crc kubenswrapper[4833]: I0219 13:04:00.735677 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 19 13:04:00 crc kubenswrapper[4833]: I0219 13:04:00.784628 4833 generic.go:334] "Generic (PLEG): container finished" podID="5e56bcf5-cac7-4e98-a3b0-43430ecf891e" containerID="9aa49bddb3daac39826569d79f66291b50c69594938b275b9fc365c1fd6f2944" exitCode=0 Feb 19 13:04:00 crc kubenswrapper[4833]: I0219 13:04:00.784690 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-p9qjz" event={"ID":"5e56bcf5-cac7-4e98-a3b0-43430ecf891e","Type":"ContainerDied","Data":"9aa49bddb3daac39826569d79f66291b50c69594938b275b9fc365c1fd6f2944"} Feb 19 13:04:00 crc kubenswrapper[4833]: I0219 13:04:00.787108 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a356e13b-39de-4d0b-aa58-f2dc6d3179fb","Type":"ContainerStarted","Data":"1f4138a564ad2060601baf14a1521b564d28aa8298dabde6f911b2c8db00a56f"} Feb 19 13:04:00 crc kubenswrapper[4833]: I0219 13:04:00.809929 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sjjkg" event={"ID":"8611c57b-ad50-4bdf-a87b-b5d2dff20ac8","Type":"ContainerStarted","Data":"5b16bda53cfdd66206d760bc0b6f48b1ab7187858af29e37ee123f0d0f5ed21f"} Feb 19 13:04:00 crc kubenswrapper[4833]: I0219 13:04:00.849677 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 19 13:04:00 crc kubenswrapper[4833]: I0219 13:04:00.852155 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-sjjkg" podStartSLOduration=3.852131972 podStartE2EDuration="3.852131972s" podCreationTimestamp="2026-02-19 13:03:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:04:00.847099843 +0000 UTC m=+1051.242618611" watchObservedRunningTime="2026-02-19 13:04:00.852131972 +0000 UTC m=+1051.247650750" Feb 19 13:04:01 crc kubenswrapper[4833]: I0219 13:04:01.066578 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-mvc4l" Feb 19 13:04:01 crc kubenswrapper[4833]: I0219 13:04:01.243807 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91fd3e8f-0f4e-4fa4-939f-e09f4f75c408-config\") pod \"91fd3e8f-0f4e-4fa4-939f-e09f4f75c408\" (UID: \"91fd3e8f-0f4e-4fa4-939f-e09f4f75c408\") " Feb 19 13:04:01 crc kubenswrapper[4833]: I0219 13:04:01.243923 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91fd3e8f-0f4e-4fa4-939f-e09f4f75c408-ovsdbserver-sb\") pod \"91fd3e8f-0f4e-4fa4-939f-e09f4f75c408\" (UID: \"91fd3e8f-0f4e-4fa4-939f-e09f4f75c408\") " Feb 19 13:04:01 crc kubenswrapper[4833]: I0219 13:04:01.244009 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sknjk\" (UniqueName: \"kubernetes.io/projected/91fd3e8f-0f4e-4fa4-939f-e09f4f75c408-kube-api-access-sknjk\") pod \"91fd3e8f-0f4e-4fa4-939f-e09f4f75c408\" (UID: \"91fd3e8f-0f4e-4fa4-939f-e09f4f75c408\") " Feb 19 13:04:01 crc kubenswrapper[4833]: I0219 13:04:01.244066 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91fd3e8f-0f4e-4fa4-939f-e09f4f75c408-dns-svc\") pod \"91fd3e8f-0f4e-4fa4-939f-e09f4f75c408\" (UID: \"91fd3e8f-0f4e-4fa4-939f-e09f4f75c408\") " Feb 19 13:04:01 crc kubenswrapper[4833]: I0219 13:04:01.248701 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91fd3e8f-0f4e-4fa4-939f-e09f4f75c408-kube-api-access-sknjk" (OuterVolumeSpecName: "kube-api-access-sknjk") pod "91fd3e8f-0f4e-4fa4-939f-e09f4f75c408" (UID: "91fd3e8f-0f4e-4fa4-939f-e09f4f75c408"). InnerVolumeSpecName "kube-api-access-sknjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:04:01 crc kubenswrapper[4833]: I0219 13:04:01.290813 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91fd3e8f-0f4e-4fa4-939f-e09f4f75c408-config" (OuterVolumeSpecName: "config") pod "91fd3e8f-0f4e-4fa4-939f-e09f4f75c408" (UID: "91fd3e8f-0f4e-4fa4-939f-e09f4f75c408"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:01 crc kubenswrapper[4833]: I0219 13:04:01.296538 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91fd3e8f-0f4e-4fa4-939f-e09f4f75c408-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "91fd3e8f-0f4e-4fa4-939f-e09f4f75c408" (UID: "91fd3e8f-0f4e-4fa4-939f-e09f4f75c408"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:01 crc kubenswrapper[4833]: I0219 13:04:01.301038 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91fd3e8f-0f4e-4fa4-939f-e09f4f75c408-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "91fd3e8f-0f4e-4fa4-939f-e09f4f75c408" (UID: "91fd3e8f-0f4e-4fa4-939f-e09f4f75c408"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:01 crc kubenswrapper[4833]: I0219 13:04:01.345846 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91fd3e8f-0f4e-4fa4-939f-e09f4f75c408-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:01 crc kubenswrapper[4833]: I0219 13:04:01.345874 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91fd3e8f-0f4e-4fa4-939f-e09f4f75c408-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:01 crc kubenswrapper[4833]: I0219 13:04:01.345888 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sknjk\" (UniqueName: \"kubernetes.io/projected/91fd3e8f-0f4e-4fa4-939f-e09f4f75c408-kube-api-access-sknjk\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:01 crc kubenswrapper[4833]: I0219 13:04:01.345897 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91fd3e8f-0f4e-4fa4-939f-e09f4f75c408-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:01 crc kubenswrapper[4833]: I0219 13:04:01.836240 4833 generic.go:334] "Generic (PLEG): container finished" podID="8611c57b-ad50-4bdf-a87b-b5d2dff20ac8" containerID="5b16bda53cfdd66206d760bc0b6f48b1ab7187858af29e37ee123f0d0f5ed21f" exitCode=0 Feb 19 13:04:01 crc kubenswrapper[4833]: I0219 13:04:01.836306 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sjjkg" event={"ID":"8611c57b-ad50-4bdf-a87b-b5d2dff20ac8","Type":"ContainerDied","Data":"5b16bda53cfdd66206d760bc0b6f48b1ab7187858af29e37ee123f0d0f5ed21f"} Feb 19 13:04:01 crc kubenswrapper[4833]: I0219 13:04:01.838773 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-mvc4l" event={"ID":"91fd3e8f-0f4e-4fa4-939f-e09f4f75c408","Type":"ContainerDied","Data":"35fe27d074922a4ae27a2d5f7a9fd0ab15cf442c157077e84311d3336046881d"} Feb 19 13:04:01 crc kubenswrapper[4833]: I0219 13:04:01.838799 4833 scope.go:117] "RemoveContainer" containerID="e589a42651d566d59f127f2d2da1097f03189d353e97434ee440973d05d307f4" Feb 19 13:04:01 crc kubenswrapper[4833]: I0219 13:04:01.838886 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-mvc4l" Feb 19 13:04:01 crc kubenswrapper[4833]: I0219 13:04:01.842651 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-p9qjz" event={"ID":"5e56bcf5-cac7-4e98-a3b0-43430ecf891e","Type":"ContainerStarted","Data":"f23405dc86fd18cbe13542d135453395cd350e7aa83fb011a01515eb8eb63398"} Feb 19 13:04:01 crc kubenswrapper[4833]: I0219 13:04:01.842810 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-p9qjz" Feb 19 13:04:01 crc kubenswrapper[4833]: I0219 13:04:01.848742 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1e175eae-03fe-4c4b-b5d2-96df10844449","Type":"ContainerStarted","Data":"51d3b9b41db63f66151363d9ddebc7ce122688c02406feb2a0d5058e3b03bb4f"} Feb 19 13:04:01 crc kubenswrapper[4833]: I0219 13:04:01.885261 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-mvc4l"] Feb 19 13:04:01 crc kubenswrapper[4833]: I0219 13:04:01.894307 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-mvc4l"] Feb 19 13:04:01 crc kubenswrapper[4833]: I0219 13:04:01.896648 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-p9qjz" podStartSLOduration=11.896628249 podStartE2EDuration="11.896628249s" podCreationTimestamp="2026-02-19 13:03:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:04:01.892173535 +0000 UTC m=+1052.287692303" watchObservedRunningTime="2026-02-19 13:04:01.896628249 +0000 UTC m=+1052.292147017" Feb 19 13:04:02 crc kubenswrapper[4833]: I0219 13:04:02.324929 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91fd3e8f-0f4e-4fa4-939f-e09f4f75c408" path="/var/lib/kubelet/pods/91fd3e8f-0f4e-4fa4-939f-e09f4f75c408/volumes" Feb 19 13:04:03 crc kubenswrapper[4833]: I0219 13:04:03.691603 4833 scope.go:117] "RemoveContainer" containerID="95da9a4c804e17f53b7f8fc05e553ec407281005941e1879dd09152ec6c0594c" Feb 19 13:04:03 crc kubenswrapper[4833]: I0219 13:04:03.801026 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-sjjkg" Feb 19 13:04:03 crc kubenswrapper[4833]: I0219 13:04:03.868344 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-sjjkg" Feb 19 13:04:03 crc kubenswrapper[4833]: I0219 13:04:03.868332 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sjjkg" event={"ID":"8611c57b-ad50-4bdf-a87b-b5d2dff20ac8","Type":"ContainerDied","Data":"48ea46de45d5fde06ef60cfef1d20cba996cc3c5fae4664033388b67cc56baa9"} Feb 19 13:04:03 crc kubenswrapper[4833]: I0219 13:04:03.868399 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48ea46de45d5fde06ef60cfef1d20cba996cc3c5fae4664033388b67cc56baa9" Feb 19 13:04:03 crc kubenswrapper[4833]: I0219 13:04:03.997097 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lltbh\" (UniqueName: \"kubernetes.io/projected/8611c57b-ad50-4bdf-a87b-b5d2dff20ac8-kube-api-access-lltbh\") pod \"8611c57b-ad50-4bdf-a87b-b5d2dff20ac8\" (UID: \"8611c57b-ad50-4bdf-a87b-b5d2dff20ac8\") " Feb 19 13:04:03 crc kubenswrapper[4833]: I0219 13:04:03.997199 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8611c57b-ad50-4bdf-a87b-b5d2dff20ac8-operator-scripts\") pod \"8611c57b-ad50-4bdf-a87b-b5d2dff20ac8\" (UID: \"8611c57b-ad50-4bdf-a87b-b5d2dff20ac8\") " Feb 19 13:04:03 crc kubenswrapper[4833]: I0219 13:04:03.997889 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8611c57b-ad50-4bdf-a87b-b5d2dff20ac8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8611c57b-ad50-4bdf-a87b-b5d2dff20ac8" (UID: "8611c57b-ad50-4bdf-a87b-b5d2dff20ac8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:04 crc kubenswrapper[4833]: I0219 13:04:04.002538 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8611c57b-ad50-4bdf-a87b-b5d2dff20ac8-kube-api-access-lltbh" (OuterVolumeSpecName: "kube-api-access-lltbh") pod "8611c57b-ad50-4bdf-a87b-b5d2dff20ac8" (UID: "8611c57b-ad50-4bdf-a87b-b5d2dff20ac8"). InnerVolumeSpecName "kube-api-access-lltbh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:04:04 crc kubenswrapper[4833]: I0219 13:04:04.099451 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lltbh\" (UniqueName: \"kubernetes.io/projected/8611c57b-ad50-4bdf-a87b-b5d2dff20ac8-kube-api-access-lltbh\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:04 crc kubenswrapper[4833]: I0219 13:04:04.099525 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8611c57b-ad50-4bdf-a87b-b5d2dff20ac8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:04 crc kubenswrapper[4833]: I0219 13:04:04.879288 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-p2m8p" event={"ID":"46126eda-f691-4339-966c-615190176dea","Type":"ContainerStarted","Data":"774d92b2bc694577556911e4d9eb9fb733d0516342660f23fa14e7d456798b28"} Feb 19 13:04:04 crc kubenswrapper[4833]: I0219 13:04:04.881108 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1e175eae-03fe-4c4b-b5d2-96df10844449","Type":"ContainerStarted","Data":"e408cae9a1744f7c9aee7f5d5ac0861905d61a684cb19f48b1023acbafda5b8b"} Feb 19 13:04:04 crc kubenswrapper[4833]: I0219 13:04:04.881275 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 19 13:04:04 crc kubenswrapper[4833]: I0219 13:04:04.925025 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-p2m8p" podStartSLOduration=2.132931728 podStartE2EDuration="8.924997276s" podCreationTimestamp="2026-02-19 13:03:56 +0000 UTC" firstStartedPulling="2026-02-19 13:03:57.00067572 +0000 UTC m=+1047.396194488" lastFinishedPulling="2026-02-19 13:04:03.792741238 +0000 UTC m=+1054.188260036" observedRunningTime="2026-02-19 13:04:04.914787196 +0000 UTC m=+1055.310306034" watchObservedRunningTime="2026-02-19 13:04:04.924997276 +0000 UTC m=+1055.320516074" Feb 19 13:04:04 crc kubenswrapper[4833]: I0219 13:04:04.939379 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=4.345623635 podStartE2EDuration="13.939356763s" podCreationTimestamp="2026-02-19 13:03:51 +0000 UTC" firstStartedPulling="2026-02-19 13:03:51.894893174 +0000 UTC m=+1042.290411942" lastFinishedPulling="2026-02-19 13:04:01.488626292 +0000 UTC m=+1051.884145070" observedRunningTime="2026-02-19 13:04:04.938299096 +0000 UTC m=+1055.333817864" watchObservedRunningTime="2026-02-19 13:04:04.939356763 +0000 UTC m=+1055.334875551" Feb 19 13:04:05 crc kubenswrapper[4833]: I0219 13:04:05.892077 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-sjjkg"] Feb 19 13:04:05 crc kubenswrapper[4833]: I0219 13:04:05.903101 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-sjjkg"] Feb 19 13:04:05 crc kubenswrapper[4833]: I0219 13:04:05.960855 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-wrhrc"] Feb 19 13:04:05 crc kubenswrapper[4833]: E0219 13:04:05.961334 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91fd3e8f-0f4e-4fa4-939f-e09f4f75c408" containerName="init" Feb 19 13:04:05 crc kubenswrapper[4833]: I0219 13:04:05.961362 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="91fd3e8f-0f4e-4fa4-939f-e09f4f75c408" containerName="init" Feb 19 13:04:05 crc kubenswrapper[4833]: E0219 
13:04:05.961398 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8611c57b-ad50-4bdf-a87b-b5d2dff20ac8" containerName="mariadb-account-create-update" Feb 19 13:04:05 crc kubenswrapper[4833]: I0219 13:04:05.961409 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="8611c57b-ad50-4bdf-a87b-b5d2dff20ac8" containerName="mariadb-account-create-update" Feb 19 13:04:05 crc kubenswrapper[4833]: E0219 13:04:05.961432 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91fd3e8f-0f4e-4fa4-939f-e09f4f75c408" containerName="dnsmasq-dns" Feb 19 13:04:05 crc kubenswrapper[4833]: I0219 13:04:05.961443 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="91fd3e8f-0f4e-4fa4-939f-e09f4f75c408" containerName="dnsmasq-dns" Feb 19 13:04:05 crc kubenswrapper[4833]: I0219 13:04:05.961671 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="8611c57b-ad50-4bdf-a87b-b5d2dff20ac8" containerName="mariadb-account-create-update" Feb 19 13:04:05 crc kubenswrapper[4833]: I0219 13:04:05.961697 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="91fd3e8f-0f4e-4fa4-939f-e09f4f75c408" containerName="dnsmasq-dns" Feb 19 13:04:05 crc kubenswrapper[4833]: I0219 13:04:05.962300 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wrhrc" Feb 19 13:04:05 crc kubenswrapper[4833]: I0219 13:04:05.965646 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 19 13:04:05 crc kubenswrapper[4833]: I0219 13:04:05.979818 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wrhrc"] Feb 19 13:04:06 crc kubenswrapper[4833]: I0219 13:04:06.134478 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4m5n\" (UniqueName: \"kubernetes.io/projected/664b0c3e-bb34-4357-a07b-418b3b890ec3-kube-api-access-f4m5n\") pod \"root-account-create-update-wrhrc\" (UID: \"664b0c3e-bb34-4357-a07b-418b3b890ec3\") " pod="openstack/root-account-create-update-wrhrc" Feb 19 13:04:06 crc kubenswrapper[4833]: I0219 13:04:06.134665 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/664b0c3e-bb34-4357-a07b-418b3b890ec3-operator-scripts\") pod \"root-account-create-update-wrhrc\" (UID: \"664b0c3e-bb34-4357-a07b-418b3b890ec3\") " pod="openstack/root-account-create-update-wrhrc" Feb 19 13:04:06 crc kubenswrapper[4833]: I0219 13:04:06.236352 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4m5n\" (UniqueName: \"kubernetes.io/projected/664b0c3e-bb34-4357-a07b-418b3b890ec3-kube-api-access-f4m5n\") pod \"root-account-create-update-wrhrc\" (UID: \"664b0c3e-bb34-4357-a07b-418b3b890ec3\") " pod="openstack/root-account-create-update-wrhrc" Feb 19 13:04:06 crc kubenswrapper[4833]: I0219 13:04:06.236460 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/664b0c3e-bb34-4357-a07b-418b3b890ec3-operator-scripts\") pod \"root-account-create-update-wrhrc\" (UID: \"664b0c3e-bb34-4357-a07b-418b3b890ec3\") " pod="openstack/root-account-create-update-wrhrc" Feb 19 13:04:06 crc kubenswrapper[4833]: I0219 13:04:06.237185 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/664b0c3e-bb34-4357-a07b-418b3b890ec3-operator-scripts\") pod \"root-account-create-update-wrhrc\" (UID: \"664b0c3e-bb34-4357-a07b-418b3b890ec3\") " pod="openstack/root-account-create-update-wrhrc" Feb 19 13:04:06 crc kubenswrapper[4833]: I0219 13:04:06.259287 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4m5n\" (UniqueName: \"kubernetes.io/projected/664b0c3e-bb34-4357-a07b-418b3b890ec3-kube-api-access-f4m5n\") pod \"root-account-create-update-wrhrc\" (UID: \"664b0c3e-bb34-4357-a07b-418b3b890ec3\") " pod="openstack/root-account-create-update-wrhrc" Feb 19 13:04:06 crc kubenswrapper[4833]: I0219 13:04:06.280096 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wrhrc" Feb 19 13:04:06 crc kubenswrapper[4833]: I0219 13:04:06.337153 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8611c57b-ad50-4bdf-a87b-b5d2dff20ac8" path="/var/lib/kubelet/pods/8611c57b-ad50-4bdf-a87b-b5d2dff20ac8/volumes" Feb 19 13:04:06 crc kubenswrapper[4833]: I0219 13:04:06.359756 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-p9qjz" Feb 19 13:04:06 crc kubenswrapper[4833]: I0219 13:04:06.441478 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-fcs9l"] Feb 19 13:04:06 crc kubenswrapper[4833]: I0219 13:04:06.441985 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-fcs9l" podUID="36e91fa6-1254-4adb-afeb-736f39dc7e88" containerName="dnsmasq-dns" containerID="cri-o://f985028a35a283f5cb56ceb261aa3874cdd9f26f1e67cb031a8731d7a1d3295e" gracePeriod=10 Feb 19 13:04:06 crc kubenswrapper[4833]: I0219 13:04:06.790142 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-wrhrc"] Feb 19 13:04:06 crc kubenswrapper[4833]: W0219 13:04:06.794359 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod664b0c3e_bb34_4357_a07b_418b3b890ec3.slice/crio-3c99239d7d62d6980904be01ae01dcdf1fe7b2936eba07c3ef246155b01c1f09 WatchSource:0}: Error finding container 3c99239d7d62d6980904be01ae01dcdf1fe7b2936eba07c3ef246155b01c1f09: Status 404 returned error can't find the container with id 3c99239d7d62d6980904be01ae01dcdf1fe7b2936eba07c3ef246155b01c1f09 Feb 19 13:04:06 crc kubenswrapper[4833]: I0219 13:04:06.884600 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-fcs9l" Feb 19 13:04:06 crc kubenswrapper[4833]: I0219 13:04:06.907897 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wrhrc" event={"ID":"664b0c3e-bb34-4357-a07b-418b3b890ec3","Type":"ContainerStarted","Data":"3c99239d7d62d6980904be01ae01dcdf1fe7b2936eba07c3ef246155b01c1f09"} Feb 19 13:04:06 crc kubenswrapper[4833]: I0219 13:04:06.911341 4833 generic.go:334] "Generic (PLEG): container finished" podID="36e91fa6-1254-4adb-afeb-736f39dc7e88" containerID="f985028a35a283f5cb56ceb261aa3874cdd9f26f1e67cb031a8731d7a1d3295e" exitCode=0 Feb 19 13:04:06 crc kubenswrapper[4833]: I0219 13:04:06.911392 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-fcs9l" event={"ID":"36e91fa6-1254-4adb-afeb-736f39dc7e88","Type":"ContainerDied","Data":"f985028a35a283f5cb56ceb261aa3874cdd9f26f1e67cb031a8731d7a1d3295e"} Feb 19 13:04:06 crc kubenswrapper[4833]: I0219 13:04:06.911434 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-fcs9l" event={"ID":"36e91fa6-1254-4adb-afeb-736f39dc7e88","Type":"ContainerDied","Data":"ebdf422d560b9155ef98fc748ec85a7cb6c3262b3e1b90eff685fbeeff34486a"} Feb 19 13:04:06 crc kubenswrapper[4833]: I0219 13:04:06.911455 4833 scope.go:117] "RemoveContainer" containerID="f985028a35a283f5cb56ceb261aa3874cdd9f26f1e67cb031a8731d7a1d3295e" Feb 19 13:04:06 crc kubenswrapper[4833]: I0219 13:04:06.911632 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-fcs9l" Feb 19 13:04:06 crc kubenswrapper[4833]: I0219 13:04:06.937127 4833 scope.go:117] "RemoveContainer" containerID="d57efa3b2d63c7cb77379426365006be7ad44302b2d0986101da3504ae17e998" Feb 19 13:04:06 crc kubenswrapper[4833]: I0219 13:04:06.951713 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsdcq\" (UniqueName: \"kubernetes.io/projected/36e91fa6-1254-4adb-afeb-736f39dc7e88-kube-api-access-xsdcq\") pod \"36e91fa6-1254-4adb-afeb-736f39dc7e88\" (UID: \"36e91fa6-1254-4adb-afeb-736f39dc7e88\") " Feb 19 13:04:06 crc kubenswrapper[4833]: I0219 13:04:06.951765 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36e91fa6-1254-4adb-afeb-736f39dc7e88-ovsdbserver-nb\") pod \"36e91fa6-1254-4adb-afeb-736f39dc7e88\" (UID: \"36e91fa6-1254-4adb-afeb-736f39dc7e88\") " Feb 19 13:04:06 crc kubenswrapper[4833]: I0219 13:04:06.951846 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36e91fa6-1254-4adb-afeb-736f39dc7e88-config\") pod \"36e91fa6-1254-4adb-afeb-736f39dc7e88\" (UID: \"36e91fa6-1254-4adb-afeb-736f39dc7e88\") " Feb 19 13:04:06 crc kubenswrapper[4833]: I0219 13:04:06.951973 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36e91fa6-1254-4adb-afeb-736f39dc7e88-ovsdbserver-sb\") pod \"36e91fa6-1254-4adb-afeb-736f39dc7e88\" (UID: \"36e91fa6-1254-4adb-afeb-736f39dc7e88\") " Feb 19 13:04:06 crc kubenswrapper[4833]: I0219 13:04:06.952065 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36e91fa6-1254-4adb-afeb-736f39dc7e88-dns-svc\") pod \"36e91fa6-1254-4adb-afeb-736f39dc7e88\" (UID: 
\"36e91fa6-1254-4adb-afeb-736f39dc7e88\") " Feb 19 13:04:06 crc kubenswrapper[4833]: I0219 13:04:06.957929 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36e91fa6-1254-4adb-afeb-736f39dc7e88-kube-api-access-xsdcq" (OuterVolumeSpecName: "kube-api-access-xsdcq") pod "36e91fa6-1254-4adb-afeb-736f39dc7e88" (UID: "36e91fa6-1254-4adb-afeb-736f39dc7e88"). InnerVolumeSpecName "kube-api-access-xsdcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:04:06 crc kubenswrapper[4833]: I0219 13:04:06.961776 4833 scope.go:117] "RemoveContainer" containerID="f985028a35a283f5cb56ceb261aa3874cdd9f26f1e67cb031a8731d7a1d3295e" Feb 19 13:04:06 crc kubenswrapper[4833]: E0219 13:04:06.962148 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f985028a35a283f5cb56ceb261aa3874cdd9f26f1e67cb031a8731d7a1d3295e\": container with ID starting with f985028a35a283f5cb56ceb261aa3874cdd9f26f1e67cb031a8731d7a1d3295e not found: ID does not exist" containerID="f985028a35a283f5cb56ceb261aa3874cdd9f26f1e67cb031a8731d7a1d3295e" Feb 19 13:04:06 crc kubenswrapper[4833]: I0219 13:04:06.962178 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f985028a35a283f5cb56ceb261aa3874cdd9f26f1e67cb031a8731d7a1d3295e"} err="failed to get container status \"f985028a35a283f5cb56ceb261aa3874cdd9f26f1e67cb031a8731d7a1d3295e\": rpc error: code = NotFound desc = could not find container \"f985028a35a283f5cb56ceb261aa3874cdd9f26f1e67cb031a8731d7a1d3295e\": container with ID starting with f985028a35a283f5cb56ceb261aa3874cdd9f26f1e67cb031a8731d7a1d3295e not found: ID does not exist" Feb 19 13:04:06 crc kubenswrapper[4833]: I0219 13:04:06.962199 4833 scope.go:117] "RemoveContainer" containerID="d57efa3b2d63c7cb77379426365006be7ad44302b2d0986101da3504ae17e998" Feb 19 13:04:06 crc kubenswrapper[4833]: E0219 13:04:06.962559 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d57efa3b2d63c7cb77379426365006be7ad44302b2d0986101da3504ae17e998\": container with ID starting with d57efa3b2d63c7cb77379426365006be7ad44302b2d0986101da3504ae17e998 not found: ID does not exist" containerID="d57efa3b2d63c7cb77379426365006be7ad44302b2d0986101da3504ae17e998" Feb 19 13:04:06 crc kubenswrapper[4833]: I0219 13:04:06.962580 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d57efa3b2d63c7cb77379426365006be7ad44302b2d0986101da3504ae17e998"} err="failed to get container status \"d57efa3b2d63c7cb77379426365006be7ad44302b2d0986101da3504ae17e998\": rpc error: code = NotFound desc = could not find container \"d57efa3b2d63c7cb77379426365006be7ad44302b2d0986101da3504ae17e998\": container with ID starting with d57efa3b2d63c7cb77379426365006be7ad44302b2d0986101da3504ae17e998 not found: ID does not exist" Feb 19 13:04:06 crc kubenswrapper[4833]: I0219 13:04:06.989144 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36e91fa6-1254-4adb-afeb-736f39dc7e88-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "36e91fa6-1254-4adb-afeb-736f39dc7e88" (UID: "36e91fa6-1254-4adb-afeb-736f39dc7e88"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:06 crc kubenswrapper[4833]: I0219 13:04:06.992261 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36e91fa6-1254-4adb-afeb-736f39dc7e88-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "36e91fa6-1254-4adb-afeb-736f39dc7e88" (UID: "36e91fa6-1254-4adb-afeb-736f39dc7e88"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:06 crc kubenswrapper[4833]: I0219 13:04:06.994406 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36e91fa6-1254-4adb-afeb-736f39dc7e88-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "36e91fa6-1254-4adb-afeb-736f39dc7e88" (UID: "36e91fa6-1254-4adb-afeb-736f39dc7e88"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:06 crc kubenswrapper[4833]: I0219 13:04:06.995605 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36e91fa6-1254-4adb-afeb-736f39dc7e88-config" (OuterVolumeSpecName: "config") pod "36e91fa6-1254-4adb-afeb-736f39dc7e88" (UID: "36e91fa6-1254-4adb-afeb-736f39dc7e88"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:07 crc kubenswrapper[4833]: I0219 13:04:07.055719 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36e91fa6-1254-4adb-afeb-736f39dc7e88-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:07 crc kubenswrapper[4833]: I0219 13:04:07.055784 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsdcq\" (UniqueName: \"kubernetes.io/projected/36e91fa6-1254-4adb-afeb-736f39dc7e88-kube-api-access-xsdcq\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:07 crc kubenswrapper[4833]: I0219 13:04:07.055805 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36e91fa6-1254-4adb-afeb-736f39dc7e88-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:07 crc kubenswrapper[4833]: I0219 13:04:07.055813 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36e91fa6-1254-4adb-afeb-736f39dc7e88-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:07 crc kubenswrapper[4833]: I0219 13:04:07.055942 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36e91fa6-1254-4adb-afeb-736f39dc7e88-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:07 crc kubenswrapper[4833]: I0219 13:04:07.267570 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-fcs9l"] Feb 19 13:04:07 crc kubenswrapper[4833]: I0219 13:04:07.278714 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-fcs9l"] Feb 19 13:04:07 crc kubenswrapper[4833]: I0219 13:04:07.919895 4833 generic.go:334] "Generic (PLEG): container finished" podID="664b0c3e-bb34-4357-a07b-418b3b890ec3" containerID="5b49de954ade5270b73330611daee6471d79f4fd53683cf220e957c41b76dfae" exitCode=0 Feb 19 13:04:07 crc kubenswrapper[4833]: I0219 13:04:07.919980 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wrhrc" 
event={"ID":"664b0c3e-bb34-4357-a07b-418b3b890ec3","Type":"ContainerDied","Data":"5b49de954ade5270b73330611daee6471d79f4fd53683cf220e957c41b76dfae"} Feb 19 13:04:08 crc kubenswrapper[4833]: I0219 13:04:08.070597 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0dfc7a49-4c64-4c4c-b0a9-eea1d8734612-etc-swift\") pod \"swift-storage-0\" (UID: \"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612\") " pod="openstack/swift-storage-0" Feb 19 13:04:08 crc kubenswrapper[4833]: E0219 13:04:08.070855 4833 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 19 13:04:08 crc kubenswrapper[4833]: E0219 13:04:08.070872 4833 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 19 13:04:08 crc kubenswrapper[4833]: E0219 13:04:08.070922 4833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0dfc7a49-4c64-4c4c-b0a9-eea1d8734612-etc-swift podName:0dfc7a49-4c64-4c4c-b0a9-eea1d8734612 nodeName:}" failed. No retries permitted until 2026-02-19 13:04:24.070903644 +0000 UTC m=+1074.466422412 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/0dfc7a49-4c64-4c4c-b0a9-eea1d8734612-etc-swift") pod "swift-storage-0" (UID: "0dfc7a49-4c64-4c4c-b0a9-eea1d8734612") : configmap "swift-ring-files" not found Feb 19 13:04:08 crc kubenswrapper[4833]: I0219 13:04:08.342285 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36e91fa6-1254-4adb-afeb-736f39dc7e88" path="/var/lib/kubelet/pods/36e91fa6-1254-4adb-afeb-736f39dc7e88/volumes" Feb 19 13:04:08 crc kubenswrapper[4833]: I0219 13:04:08.901263 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-svb9r"] Feb 19 13:04:08 crc kubenswrapper[4833]: E0219 13:04:08.901961 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36e91fa6-1254-4adb-afeb-736f39dc7e88" containerName="dnsmasq-dns" Feb 19 13:04:08 crc kubenswrapper[4833]: I0219 13:04:08.902006 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="36e91fa6-1254-4adb-afeb-736f39dc7e88" containerName="dnsmasq-dns" Feb 19 13:04:08 crc kubenswrapper[4833]: E0219 13:04:08.902054 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36e91fa6-1254-4adb-afeb-736f39dc7e88" containerName="init" Feb 19 13:04:08 crc kubenswrapper[4833]: I0219 13:04:08.902073 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="36e91fa6-1254-4adb-afeb-736f39dc7e88" containerName="init" Feb 19 13:04:08 crc kubenswrapper[4833]: I0219 13:04:08.902550 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="36e91fa6-1254-4adb-afeb-736f39dc7e88" containerName="dnsmasq-dns" Feb 19 13:04:08 crc kubenswrapper[4833]: I0219 13:04:08.903679 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-svb9r" Feb 19 13:04:08 crc kubenswrapper[4833]: I0219 13:04:08.928408 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-svb9r"] Feb 19 13:04:08 crc kubenswrapper[4833]: I0219 13:04:08.987963 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-987b-account-create-update-qf697"] Feb 19 13:04:08 crc kubenswrapper[4833]: I0219 13:04:08.989338 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-987b-account-create-update-qf697" Feb 19 13:04:08 crc kubenswrapper[4833]: I0219 13:04:08.992177 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.000567 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-987b-account-create-update-qf697"] Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.087546 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07c210e9-a412-4019-b8ae-2bf67642661a-operator-scripts\") pod \"glance-db-create-svb9r\" (UID: \"07c210e9-a412-4019-b8ae-2bf67642661a\") " pod="openstack/glance-db-create-svb9r" Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.087633 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9j2l\" (UniqueName: \"kubernetes.io/projected/07c210e9-a412-4019-b8ae-2bf67642661a-kube-api-access-v9j2l\") pod \"glance-db-create-svb9r\" (UID: \"07c210e9-a412-4019-b8ae-2bf67642661a\") " pod="openstack/glance-db-create-svb9r" Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.087698 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r26n\" (UniqueName: \"kubernetes.io/projected/c0fb7215-a577-4756-8363-a4cba291a804-kube-api-access-6r26n\") pod \"glance-987b-account-create-update-qf697\" (UID: \"c0fb7215-a577-4756-8363-a4cba291a804\") " pod="openstack/glance-987b-account-create-update-qf697" Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.087857 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0fb7215-a577-4756-8363-a4cba291a804-operator-scripts\") pod \"glance-987b-account-create-update-qf697\" (UID: \"c0fb7215-a577-4756-8363-a4cba291a804\") " pod="openstack/glance-987b-account-create-update-qf697" Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.189711 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r26n\" (UniqueName: \"kubernetes.io/projected/c0fb7215-a577-4756-8363-a4cba291a804-kube-api-access-6r26n\") pod \"glance-987b-account-create-update-qf697\" (UID: \"c0fb7215-a577-4756-8363-a4cba291a804\") " pod="openstack/glance-987b-account-create-update-qf697" Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.190215 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0fb7215-a577-4756-8363-a4cba291a804-operator-scripts\") pod \"glance-987b-account-create-update-qf697\" (UID: \"c0fb7215-a577-4756-8363-a4cba291a804\") " pod="openstack/glance-987b-account-create-update-qf697" Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.190463 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07c210e9-a412-4019-b8ae-2bf67642661a-operator-scripts\") pod \"glance-db-create-svb9r\" (UID: \"07c210e9-a412-4019-b8ae-2bf67642661a\") " pod="openstack/glance-db-create-svb9r" Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.190531 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9j2l\" (UniqueName: 
\"kubernetes.io/projected/07c210e9-a412-4019-b8ae-2bf67642661a-kube-api-access-v9j2l\") pod \"glance-db-create-svb9r\" (UID: \"07c210e9-a412-4019-b8ae-2bf67642661a\") " pod="openstack/glance-db-create-svb9r" Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.191152 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0fb7215-a577-4756-8363-a4cba291a804-operator-scripts\") pod \"glance-987b-account-create-update-qf697\" (UID: \"c0fb7215-a577-4756-8363-a4cba291a804\") " pod="openstack/glance-987b-account-create-update-qf697" Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.192434 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07c210e9-a412-4019-b8ae-2bf67642661a-operator-scripts\") pod \"glance-db-create-svb9r\" (UID: \"07c210e9-a412-4019-b8ae-2bf67642661a\") " pod="openstack/glance-db-create-svb9r" Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.217285 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r26n\" (UniqueName: \"kubernetes.io/projected/c0fb7215-a577-4756-8363-a4cba291a804-kube-api-access-6r26n\") pod \"glance-987b-account-create-update-qf697\" (UID: \"c0fb7215-a577-4756-8363-a4cba291a804\") " pod="openstack/glance-987b-account-create-update-qf697" Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.219305 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9j2l\" (UniqueName: \"kubernetes.io/projected/07c210e9-a412-4019-b8ae-2bf67642661a-kube-api-access-v9j2l\") pod \"glance-db-create-svb9r\" (UID: \"07c210e9-a412-4019-b8ae-2bf67642661a\") " pod="openstack/glance-db-create-svb9r" Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.234221 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-svb9r" Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.307136 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-987b-account-create-update-qf697" Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.331588 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-wrhrc" Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.495373 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4m5n\" (UniqueName: \"kubernetes.io/projected/664b0c3e-bb34-4357-a07b-418b3b890ec3-kube-api-access-f4m5n\") pod \"664b0c3e-bb34-4357-a07b-418b3b890ec3\" (UID: \"664b0c3e-bb34-4357-a07b-418b3b890ec3\") " Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.495441 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/664b0c3e-bb34-4357-a07b-418b3b890ec3-operator-scripts\") pod \"664b0c3e-bb34-4357-a07b-418b3b890ec3\" (UID: \"664b0c3e-bb34-4357-a07b-418b3b890ec3\") " Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.496268 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/664b0c3e-bb34-4357-a07b-418b3b890ec3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "664b0c3e-bb34-4357-a07b-418b3b890ec3" (UID: "664b0c3e-bb34-4357-a07b-418b3b890ec3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.500875 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/664b0c3e-bb34-4357-a07b-418b3b890ec3-kube-api-access-f4m5n" (OuterVolumeSpecName: "kube-api-access-f4m5n") pod "664b0c3e-bb34-4357-a07b-418b3b890ec3" (UID: "664b0c3e-bb34-4357-a07b-418b3b890ec3"). InnerVolumeSpecName "kube-api-access-f4m5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.597178 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/664b0c3e-bb34-4357-a07b-418b3b890ec3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.597213 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4m5n\" (UniqueName: \"kubernetes.io/projected/664b0c3e-bb34-4357-a07b-418b3b890ec3-kube-api-access-f4m5n\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:09 crc kubenswrapper[4833]: W0219 13:04:09.704304 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07c210e9_a412_4019_b8ae_2bf67642661a.slice/crio-84e5c5c329dad72bf2d53740a13bc65e49e5f9ebc39d46952910ef68cfdb8307 WatchSource:0}: Error finding container 84e5c5c329dad72bf2d53740a13bc65e49e5f9ebc39d46952910ef68cfdb8307: Status 404 returned error can't find the container with id 84e5c5c329dad72bf2d53740a13bc65e49e5f9ebc39d46952910ef68cfdb8307 Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.704599 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-svb9r"] Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.880113 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-s67c7"] Feb 19 13:04:09 crc kubenswrapper[4833]: E0219 13:04:09.880656 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="664b0c3e-bb34-4357-a07b-418b3b890ec3" containerName="mariadb-account-create-update" Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.880685 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="664b0c3e-bb34-4357-a07b-418b3b890ec3" containerName="mariadb-account-create-update" Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.881032 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="664b0c3e-bb34-4357-a07b-418b3b890ec3" containerName="mariadb-account-create-update" Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.881838 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-s67c7" Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.889144 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-987b-account-create-update-qf697"] Feb 19 13:04:09 crc kubenswrapper[4833]: W0219 13:04:09.890155 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0fb7215_a577_4756_8363_a4cba291a804.slice/crio-9f4e15b9d9b6282490e216cfc7a7989241ec71226b7296e700f0c3a8e5ed826a WatchSource:0}: Error finding container 9f4e15b9d9b6282490e216cfc7a7989241ec71226b7296e700f0c3a8e5ed826a: Status 404 returned error can't find the container with id 9f4e15b9d9b6282490e216cfc7a7989241ec71226b7296e700f0c3a8e5ed826a Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.895948 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-s67c7"] Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.904389 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r7x6\" (UniqueName: \"kubernetes.io/projected/a52037a8-0c1c-4c36-b1ca-f90b67f83ff0-kube-api-access-6r7x6\") pod \"keystone-db-create-s67c7\" (UID: \"a52037a8-0c1c-4c36-b1ca-f90b67f83ff0\") " pod="openstack/keystone-db-create-s67c7" Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.904774 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a52037a8-0c1c-4c36-b1ca-f90b67f83ff0-operator-scripts\") pod \"keystone-db-create-s67c7\" (UID: \"a52037a8-0c1c-4c36-b1ca-f90b67f83ff0\") " pod="openstack/keystone-db-create-s67c7" Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.940599 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-wrhrc" event={"ID":"664b0c3e-bb34-4357-a07b-418b3b890ec3","Type":"ContainerDied","Data":"3c99239d7d62d6980904be01ae01dcdf1fe7b2936eba07c3ef246155b01c1f09"} Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.940672 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c99239d7d62d6980904be01ae01dcdf1fe7b2936eba07c3ef246155b01c1f09" Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.940631 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-wrhrc" Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.963037 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-svb9r" event={"ID":"07c210e9-a412-4019-b8ae-2bf67642661a","Type":"ContainerStarted","Data":"864e10b4af564a6797f4bc901076c885ca8781850292e5abd34b0daf2fc0f467"} Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.963647 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-svb9r" event={"ID":"07c210e9-a412-4019-b8ae-2bf67642661a","Type":"ContainerStarted","Data":"84e5c5c329dad72bf2d53740a13bc65e49e5f9ebc39d46952910ef68cfdb8307"} Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.972757 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-987b-account-create-update-qf697" event={"ID":"c0fb7215-a577-4756-8363-a4cba291a804","Type":"ContainerStarted","Data":"9f4e15b9d9b6282490e216cfc7a7989241ec71226b7296e700f0c3a8e5ed826a"} Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.991158 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-0e4e-account-create-update-stpgm"] Feb 19 13:04:09 crc kubenswrapper[4833]: I0219 13:04:09.992664 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0e4e-account-create-update-stpgm" Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.010256 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r7x6\" (UniqueName: \"kubernetes.io/projected/a52037a8-0c1c-4c36-b1ca-f90b67f83ff0-kube-api-access-6r7x6\") pod \"keystone-db-create-s67c7\" (UID: \"a52037a8-0c1c-4c36-b1ca-f90b67f83ff0\") " pod="openstack/keystone-db-create-s67c7" Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.010349 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a52037a8-0c1c-4c36-b1ca-f90b67f83ff0-operator-scripts\") pod \"keystone-db-create-s67c7\" (UID: \"a52037a8-0c1c-4c36-b1ca-f90b67f83ff0\") " pod="openstack/keystone-db-create-s67c7" Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.011213 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a52037a8-0c1c-4c36-b1ca-f90b67f83ff0-operator-scripts\") pod \"keystone-db-create-s67c7\" (UID: \"a52037a8-0c1c-4c36-b1ca-f90b67f83ff0\") " pod="openstack/keystone-db-create-s67c7" Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.014198 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.032596 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r7x6\" (UniqueName: \"kubernetes.io/projected/a52037a8-0c1c-4c36-b1ca-f90b67f83ff0-kube-api-access-6r7x6\") pod \"keystone-db-create-s67c7\" (UID: \"a52037a8-0c1c-4c36-b1ca-f90b67f83ff0\") " pod="openstack/keystone-db-create-s67c7" Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.048752 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0e4e-account-create-update-stpgm"] Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.064000 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-svb9r" podStartSLOduration=2.063977439 podStartE2EDuration="2.063977439s" podCreationTimestamp="2026-02-19 
13:04:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:04:10.014450846 +0000 UTC m=+1060.409969604" watchObservedRunningTime="2026-02-19 13:04:10.063977439 +0000 UTC m=+1060.459496207" Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.085991 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-scjhz"] Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.087548 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-scjhz" Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.093678 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-scjhz"] Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.111270 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09b26dd3-c5dd-447f-a2ef-6d92e5ad231a-operator-scripts\") pod \"keystone-0e4e-account-create-update-stpgm\" (UID: \"09b26dd3-c5dd-447f-a2ef-6d92e5ad231a\") " pod="openstack/keystone-0e4e-account-create-update-stpgm" Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.111313 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pll8h\" (UniqueName: \"kubernetes.io/projected/09b26dd3-c5dd-447f-a2ef-6d92e5ad231a-kube-api-access-pll8h\") pod \"keystone-0e4e-account-create-update-stpgm\" (UID: \"09b26dd3-c5dd-447f-a2ef-6d92e5ad231a\") " pod="openstack/keystone-0e4e-account-create-update-stpgm" Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.178920 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-78fb-account-create-update-q65b4"] Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.180667 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-78fb-account-create-update-q65b4" Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.182047 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.187787 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-78fb-account-create-update-q65b4"] Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.212911 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22d6b7eb-aa56-45d5-84c2-97d3732916db-operator-scripts\") pod \"placement-db-create-scjhz\" (UID: \"22d6b7eb-aa56-45d5-84c2-97d3732916db\") " pod="openstack/placement-db-create-scjhz" Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.212966 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqn8l\" (UniqueName: \"kubernetes.io/projected/22d6b7eb-aa56-45d5-84c2-97d3732916db-kube-api-access-pqn8l\") pod \"placement-db-create-scjhz\" (UID: \"22d6b7eb-aa56-45d5-84c2-97d3732916db\") " pod="openstack/placement-db-create-scjhz" Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.213000 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09b26dd3-c5dd-447f-a2ef-6d92e5ad231a-operator-scripts\") pod \"keystone-0e4e-account-create-update-stpgm\" (UID: \"09b26dd3-c5dd-447f-a2ef-6d92e5ad231a\") " pod="openstack/keystone-0e4e-account-create-update-stpgm" Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.213018 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pll8h\" (UniqueName: \"kubernetes.io/projected/09b26dd3-c5dd-447f-a2ef-6d92e5ad231a-kube-api-access-pll8h\") pod \"keystone-0e4e-account-create-update-stpgm\" (UID: \"09b26dd3-c5dd-447f-a2ef-6d92e5ad231a\") " pod="openstack/keystone-0e4e-account-create-update-stpgm" Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.214075 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09b26dd3-c5dd-447f-a2ef-6d92e5ad231a-operator-scripts\") pod \"keystone-0e4e-account-create-update-stpgm\" (UID: \"09b26dd3-c5dd-447f-a2ef-6d92e5ad231a\") " pod="openstack/keystone-0e4e-account-create-update-stpgm" Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.216617 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-s67c7" Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.236469 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pll8h\" (UniqueName: \"kubernetes.io/projected/09b26dd3-c5dd-447f-a2ef-6d92e5ad231a-kube-api-access-pll8h\") pod \"keystone-0e4e-account-create-update-stpgm\" (UID: \"09b26dd3-c5dd-447f-a2ef-6d92e5ad231a\") " pod="openstack/keystone-0e4e-account-create-update-stpgm" Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.321073 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gprbw\" (UniqueName: \"kubernetes.io/projected/78f84e8d-9afd-4bdc-9dde-cb633b5d7083-kube-api-access-gprbw\") pod \"placement-78fb-account-create-update-q65b4\" (UID: \"78f84e8d-9afd-4bdc-9dde-cb633b5d7083\") " pod="openstack/placement-78fb-account-create-update-q65b4" Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.321147 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78f84e8d-9afd-4bdc-9dde-cb633b5d7083-operator-scripts\") pod \"placement-78fb-account-create-update-q65b4\" (UID: \"78f84e8d-9afd-4bdc-9dde-cb633b5d7083\") " pod="openstack/placement-78fb-account-create-update-q65b4" Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.321206 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22d6b7eb-aa56-45d5-84c2-97d3732916db-operator-scripts\") pod \"placement-db-create-scjhz\" (UID: \"22d6b7eb-aa56-45d5-84c2-97d3732916db\") " pod="openstack/placement-db-create-scjhz" Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.321236 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqn8l\" (UniqueName: \"kubernetes.io/projected/22d6b7eb-aa56-45d5-84c2-97d3732916db-kube-api-access-pqn8l\") pod \"placement-db-create-scjhz\" (UID: \"22d6b7eb-aa56-45d5-84c2-97d3732916db\") " pod="openstack/placement-db-create-scjhz" Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.322900 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22d6b7eb-aa56-45d5-84c2-97d3732916db-operator-scripts\") pod \"placement-db-create-scjhz\" (UID: \"22d6b7eb-aa56-45d5-84c2-97d3732916db\") " pod="openstack/placement-db-create-scjhz" Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.334704 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0e4e-account-create-update-stpgm" Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.348215 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqn8l\" (UniqueName: \"kubernetes.io/projected/22d6b7eb-aa56-45d5-84c2-97d3732916db-kube-api-access-pqn8l\") pod \"placement-db-create-scjhz\" (UID: \"22d6b7eb-aa56-45d5-84c2-97d3732916db\") " pod="openstack/placement-db-create-scjhz" Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.417880 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-scjhz" Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.425351 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gprbw\" (UniqueName: \"kubernetes.io/projected/78f84e8d-9afd-4bdc-9dde-cb633b5d7083-kube-api-access-gprbw\") pod \"placement-78fb-account-create-update-q65b4\" (UID: \"78f84e8d-9afd-4bdc-9dde-cb633b5d7083\") " pod="openstack/placement-78fb-account-create-update-q65b4" Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.425412 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78f84e8d-9afd-4bdc-9dde-cb633b5d7083-operator-scripts\") pod \"placement-78fb-account-create-update-q65b4\" (UID: \"78f84e8d-9afd-4bdc-9dde-cb633b5d7083\") " pod="openstack/placement-78fb-account-create-update-q65b4" Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.426254 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78f84e8d-9afd-4bdc-9dde-cb633b5d7083-operator-scripts\") pod \"placement-78fb-account-create-update-q65b4\" (UID: \"78f84e8d-9afd-4bdc-9dde-cb633b5d7083\") " pod="openstack/placement-78fb-account-create-update-q65b4" Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.441732 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gprbw\" (UniqueName: \"kubernetes.io/projected/78f84e8d-9afd-4bdc-9dde-cb633b5d7083-kube-api-access-gprbw\") pod \"placement-78fb-account-create-update-q65b4\" (UID: \"78f84e8d-9afd-4bdc-9dde-cb633b5d7083\") " pod="openstack/placement-78fb-account-create-update-q65b4" Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.597220 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-78fb-account-create-update-q65b4" Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.635989 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0e4e-account-create-update-stpgm"] Feb 19 13:04:10 crc kubenswrapper[4833]: W0219 13:04:10.639894 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09b26dd3_c5dd_447f_a2ef_6d92e5ad231a.slice/crio-3787dd1acd06efd6e21c74739b63226285d3e74c09257a3466e904b3a02eb9ca WatchSource:0}: Error finding container 3787dd1acd06efd6e21c74739b63226285d3e74c09257a3466e904b3a02eb9ca: Status 404 returned error can't find the container with id 3787dd1acd06efd6e21c74739b63226285d3e74c09257a3466e904b3a02eb9ca Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.701702 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-s67c7"] Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.736384 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-scjhz"] Feb 19 13:04:10 crc kubenswrapper[4833]: W0219 13:04:10.749144 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22d6b7eb_aa56_45d5_84c2_97d3732916db.slice/crio-30379b7f71821544b15a5b1b474262de90d96bf357364bc8baf5d05e746ea7fc WatchSource:0}: Error finding container 30379b7f71821544b15a5b1b474262de90d96bf357364bc8baf5d05e746ea7fc: Status 404 returned error can't find the container with id 30379b7f71821544b15a5b1b474262de90d96bf357364bc8baf5d05e746ea7fc Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.981342 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-scjhz" event={"ID":"22d6b7eb-aa56-45d5-84c2-97d3732916db","Type":"ContainerStarted","Data":"a221190702abb47c9dd475c526548391c9a956acbb95645d602f01a183d38dfd"} Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.981397 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-scjhz" event={"ID":"22d6b7eb-aa56-45d5-84c2-97d3732916db","Type":"ContainerStarted","Data":"30379b7f71821544b15a5b1b474262de90d96bf357364bc8baf5d05e746ea7fc"} Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.983113 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-s67c7" event={"ID":"a52037a8-0c1c-4c36-b1ca-f90b67f83ff0","Type":"ContainerStarted","Data":"7fc32086bbe804cfa85dc737aee3b51c6ed822be6ce41ae28d21ccb9943a040d"} Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.983158 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-s67c7" event={"ID":"a52037a8-0c1c-4c36-b1ca-f90b67f83ff0","Type":"ContainerStarted","Data":"6cc06c969f685d22ffacda62f13d3ef4e2e05bdfb2e332ff9da402540dd6348d"} Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.984653 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0e4e-account-create-update-stpgm" event={"ID":"09b26dd3-c5dd-447f-a2ef-6d92e5ad231a","Type":"ContainerStarted","Data":"cc7917898ca7e33ec77ecc5ba15b4c516f0c66d52bedcbbd0275b3eae18c39d0"} Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.984778 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0e4e-account-create-update-stpgm" 
event={"ID":"09b26dd3-c5dd-447f-a2ef-6d92e5ad231a","Type":"ContainerStarted","Data":"3787dd1acd06efd6e21c74739b63226285d3e74c09257a3466e904b3a02eb9ca"} Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.987465 4833 generic.go:334] "Generic (PLEG): container finished" podID="07c210e9-a412-4019-b8ae-2bf67642661a" containerID="864e10b4af564a6797f4bc901076c885ca8781850292e5abd34b0daf2fc0f467" exitCode=0 Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.987559 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-svb9r" event={"ID":"07c210e9-a412-4019-b8ae-2bf67642661a","Type":"ContainerDied","Data":"864e10b4af564a6797f4bc901076c885ca8781850292e5abd34b0daf2fc0f467"} Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.989856 4833 generic.go:334] "Generic (PLEG): container finished" podID="46126eda-f691-4339-966c-615190176dea" containerID="774d92b2bc694577556911e4d9eb9fb733d0516342660f23fa14e7d456798b28" exitCode=0 Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.990018 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-p2m8p" event={"ID":"46126eda-f691-4339-966c-615190176dea","Type":"ContainerDied","Data":"774d92b2bc694577556911e4d9eb9fb733d0516342660f23fa14e7d456798b28"} Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.992700 4833 generic.go:334] "Generic (PLEG): container finished" podID="c0fb7215-a577-4756-8363-a4cba291a804" containerID="19c825afeba88a6065f345e0ad6851bf0880eee2f9c49afde8667fa659ddcf05" exitCode=0 Feb 19 13:04:10 crc kubenswrapper[4833]: I0219 13:04:10.992818 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-987b-account-create-update-qf697" event={"ID":"c0fb7215-a577-4756-8363-a4cba291a804","Type":"ContainerDied","Data":"19c825afeba88a6065f345e0ad6851bf0880eee2f9c49afde8667fa659ddcf05"} Feb 19 13:04:11 crc kubenswrapper[4833]: I0219 13:04:11.000778 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-scjhz" podStartSLOduration=1.000759466 podStartE2EDuration="1.000759466s" podCreationTimestamp="2026-02-19 13:04:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:04:10.995361479 +0000 UTC m=+1061.390880257" watchObservedRunningTime="2026-02-19 13:04:11.000759466 +0000 UTC m=+1061.396278234" Feb 19 13:04:11 crc kubenswrapper[4833]: I0219 13:04:11.052773 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-78fb-account-create-update-q65b4"] Feb 19 13:04:11 crc kubenswrapper[4833]: I0219 13:04:11.061598 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-0e4e-account-create-update-stpgm" podStartSLOduration=2.061565469 podStartE2EDuration="2.061565469s" podCreationTimestamp="2026-02-19 13:04:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:04:11.054658683 +0000 UTC m=+1061.450177451" watchObservedRunningTime="2026-02-19 13:04:11.061565469 +0000 UTC m=+1061.457084237" Feb 19 13:04:11 crc kubenswrapper[4833]: I0219 13:04:11.078424 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-s67c7" podStartSLOduration=2.078403329 podStartE2EDuration="2.078403329s" podCreationTimestamp="2026-02-19 13:04:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:04:11.071858112 +0000 UTC m=+1061.467376880" watchObservedRunningTime="2026-02-19 13:04:11.078403329 +0000 UTC m=+1061.473922117" Feb 19 13:04:11 crc kubenswrapper[4833]: I0219 13:04:11.484289 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.005295 4833 generic.go:334] "Generic (PLEG): container finished" podID="22d6b7eb-aa56-45d5-84c2-97d3732916db" containerID="a221190702abb47c9dd475c526548391c9a956acbb95645d602f01a183d38dfd" exitCode=0 Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.005765 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-scjhz" event={"ID":"22d6b7eb-aa56-45d5-84c2-97d3732916db","Type":"ContainerDied","Data":"a221190702abb47c9dd475c526548391c9a956acbb95645d602f01a183d38dfd"} Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.008096 4833 generic.go:334] "Generic (PLEG): container finished" podID="a52037a8-0c1c-4c36-b1ca-f90b67f83ff0" containerID="7fc32086bbe804cfa85dc737aee3b51c6ed822be6ce41ae28d21ccb9943a040d" exitCode=0 Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.008165 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-s67c7" event={"ID":"a52037a8-0c1c-4c36-b1ca-f90b67f83ff0","Type":"ContainerDied","Data":"7fc32086bbe804cfa85dc737aee3b51c6ed822be6ce41ae28d21ccb9943a040d"} Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.020291 4833 generic.go:334] "Generic (PLEG): container finished" podID="09b26dd3-c5dd-447f-a2ef-6d92e5ad231a" containerID="cc7917898ca7e33ec77ecc5ba15b4c516f0c66d52bedcbbd0275b3eae18c39d0" exitCode=0 Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.020385 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0e4e-account-create-update-stpgm" event={"ID":"09b26dd3-c5dd-447f-a2ef-6d92e5ad231a","Type":"ContainerDied","Data":"cc7917898ca7e33ec77ecc5ba15b4c516f0c66d52bedcbbd0275b3eae18c39d0"} Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.028415 4833 generic.go:334] "Generic (PLEG): container finished" podID="78f84e8d-9afd-4bdc-9dde-cb633b5d7083" containerID="1611824dd08630a7b246b0f455581a5582ea29bc33214be853d2e64361223650" exitCode=0 Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.028780 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78fb-account-create-update-q65b4" event={"ID":"78f84e8d-9afd-4bdc-9dde-cb633b5d7083","Type":"ContainerDied","Data":"1611824dd08630a7b246b0f455581a5582ea29bc33214be853d2e64361223650"} Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.028829 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78fb-account-create-update-q65b4" event={"ID":"78f84e8d-9afd-4bdc-9dde-cb633b5d7083","Type":"ContainerStarted","Data":"eb5fc63e30cfab5e2028f48843e25460f46f643b3a42d3ddd90e7ed2f5314a8f"} Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.358097 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-wrhrc"] Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.363848 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-wrhrc"] Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.472249 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-p2m8p" Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.561846 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-987b-account-create-update-qf697" Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.567156 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46126eda-f691-4339-966c-615190176dea-scripts\") pod \"46126eda-f691-4339-966c-615190176dea\" (UID: \"46126eda-f691-4339-966c-615190176dea\") " Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.567202 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0fb7215-a577-4756-8363-a4cba291a804-operator-scripts\") pod \"c0fb7215-a577-4756-8363-a4cba291a804\" (UID: \"c0fb7215-a577-4756-8363-a4cba291a804\") " Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.567262 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r26n\" (UniqueName: \"kubernetes.io/projected/c0fb7215-a577-4756-8363-a4cba291a804-kube-api-access-6r26n\") pod \"c0fb7215-a577-4756-8363-a4cba291a804\" (UID: \"c0fb7215-a577-4756-8363-a4cba291a804\") " Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.567284 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/46126eda-f691-4339-966c-615190176dea-ring-data-devices\") pod \"46126eda-f691-4339-966c-615190176dea\" (UID: \"46126eda-f691-4339-966c-615190176dea\") " Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.567351 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/46126eda-f691-4339-966c-615190176dea-etc-swift\") pod \"46126eda-f691-4339-966c-615190176dea\" (UID: \"46126eda-f691-4339-966c-615190176dea\") " Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.567390 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/46126eda-f691-4339-966c-615190176dea-swiftconf\") pod \"46126eda-f691-4339-966c-615190176dea\" (UID: \"46126eda-f691-4339-966c-615190176dea\") " Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.567421 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/46126eda-f691-4339-966c-615190176dea-dispersionconf\") pod \"46126eda-f691-4339-966c-615190176dea\" (UID: \"46126eda-f691-4339-966c-615190176dea\") " Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.567456 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clvwz\" (UniqueName: \"kubernetes.io/projected/46126eda-f691-4339-966c-615190176dea-kube-api-access-clvwz\") pod \"46126eda-f691-4339-966c-615190176dea\" (UID: \"46126eda-f691-4339-966c-615190176dea\") " Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.567517 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46126eda-f691-4339-966c-615190176dea-combined-ca-bundle\") pod \"46126eda-f691-4339-966c-615190176dea\" (UID: \"46126eda-f691-4339-966c-615190176dea\") " Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.567933 4833 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0fb7215-a577-4756-8363-a4cba291a804-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c0fb7215-a577-4756-8363-a4cba291a804" (UID: "c0fb7215-a577-4756-8363-a4cba291a804"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.568769 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-svb9r" Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.568874 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46126eda-f691-4339-966c-615190176dea-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "46126eda-f691-4339-966c-615190176dea" (UID: "46126eda-f691-4339-966c-615190176dea"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.569414 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46126eda-f691-4339-966c-615190176dea-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "46126eda-f691-4339-966c-615190176dea" (UID: "46126eda-f691-4339-966c-615190176dea"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.572956 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0fb7215-a577-4756-8363-a4cba291a804-kube-api-access-6r26n" (OuterVolumeSpecName: "kube-api-access-6r26n") pod "c0fb7215-a577-4756-8363-a4cba291a804" (UID: "c0fb7215-a577-4756-8363-a4cba291a804"). InnerVolumeSpecName "kube-api-access-6r26n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.575765 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46126eda-f691-4339-966c-615190176dea-kube-api-access-clvwz" (OuterVolumeSpecName: "kube-api-access-clvwz") pod "46126eda-f691-4339-966c-615190176dea" (UID: "46126eda-f691-4339-966c-615190176dea"). InnerVolumeSpecName "kube-api-access-clvwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.595435 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46126eda-f691-4339-966c-615190176dea-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "46126eda-f691-4339-966c-615190176dea" (UID: "46126eda-f691-4339-966c-615190176dea"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.602192 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46126eda-f691-4339-966c-615190176dea-scripts" (OuterVolumeSpecName: "scripts") pod "46126eda-f691-4339-966c-615190176dea" (UID: "46126eda-f691-4339-966c-615190176dea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.611310 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46126eda-f691-4339-966c-615190176dea-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "46126eda-f691-4339-966c-615190176dea" (UID: "46126eda-f691-4339-966c-615190176dea"). 
InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.614458 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46126eda-f691-4339-966c-615190176dea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46126eda-f691-4339-966c-615190176dea" (UID: "46126eda-f691-4339-966c-615190176dea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.668745 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07c210e9-a412-4019-b8ae-2bf67642661a-operator-scripts\") pod \"07c210e9-a412-4019-b8ae-2bf67642661a\" (UID: \"07c210e9-a412-4019-b8ae-2bf67642661a\") " Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.668922 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9j2l\" (UniqueName: \"kubernetes.io/projected/07c210e9-a412-4019-b8ae-2bf67642661a-kube-api-access-v9j2l\") pod \"07c210e9-a412-4019-b8ae-2bf67642661a\" (UID: \"07c210e9-a412-4019-b8ae-2bf67642661a\") " Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.669236 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07c210e9-a412-4019-b8ae-2bf67642661a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "07c210e9-a412-4019-b8ae-2bf67642661a" (UID: "07c210e9-a412-4019-b8ae-2bf67642661a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.669465 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clvwz\" (UniqueName: \"kubernetes.io/projected/46126eda-f691-4339-966c-615190176dea-kube-api-access-clvwz\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.669485 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46126eda-f691-4339-966c-615190176dea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.669520 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/46126eda-f691-4339-966c-615190176dea-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.669530 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0fb7215-a577-4756-8363-a4cba291a804-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.669539 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6r26n\" (UniqueName: \"kubernetes.io/projected/c0fb7215-a577-4756-8363-a4cba291a804-kube-api-access-6r26n\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.669548 4833 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/46126eda-f691-4339-966c-615190176dea-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.669557 4833 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/46126eda-f691-4339-966c-615190176dea-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.669567 4833 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/46126eda-f691-4339-966c-615190176dea-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.669575 4833 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/46126eda-f691-4339-966c-615190176dea-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.669584 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07c210e9-a412-4019-b8ae-2bf67642661a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.673691 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07c210e9-a412-4019-b8ae-2bf67642661a-kube-api-access-v9j2l" (OuterVolumeSpecName: "kube-api-access-v9j2l") pod "07c210e9-a412-4019-b8ae-2bf67642661a" (UID: "07c210e9-a412-4019-b8ae-2bf67642661a"). InnerVolumeSpecName "kube-api-access-v9j2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:04:12 crc kubenswrapper[4833]: I0219 13:04:12.771076 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9j2l\" (UniqueName: \"kubernetes.io/projected/07c210e9-a412-4019-b8ae-2bf67642661a-kube-api-access-v9j2l\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.052012 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-svb9r" Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.052023 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-svb9r" event={"ID":"07c210e9-a412-4019-b8ae-2bf67642661a","Type":"ContainerDied","Data":"84e5c5c329dad72bf2d53740a13bc65e49e5f9ebc39d46952910ef68cfdb8307"} Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.052176 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84e5c5c329dad72bf2d53740a13bc65e49e5f9ebc39d46952910ef68cfdb8307" Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.058260 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-p2m8p" Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.058253 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-p2m8p" event={"ID":"46126eda-f691-4339-966c-615190176dea","Type":"ContainerDied","Data":"9b4b57f99fd475671ebee262a01baa244b5ca47568e8fc63d699195238e87b19"} Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.058438 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b4b57f99fd475671ebee262a01baa244b5ca47568e8fc63d699195238e87b19" Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.065608 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-987b-account-create-update-qf697" event={"ID":"c0fb7215-a577-4756-8363-a4cba291a804","Type":"ContainerDied","Data":"9f4e15b9d9b6282490e216cfc7a7989241ec71226b7296e700f0c3a8e5ed826a"} Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.065684 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f4e15b9d9b6282490e216cfc7a7989241ec71226b7296e700f0c3a8e5ed826a" Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.065905 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-987b-account-create-update-qf697" Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.564052 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-78fb-account-create-update-q65b4" Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.596025 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gprbw\" (UniqueName: \"kubernetes.io/projected/78f84e8d-9afd-4bdc-9dde-cb633b5d7083-kube-api-access-gprbw\") pod \"78f84e8d-9afd-4bdc-9dde-cb633b5d7083\" (UID: \"78f84e8d-9afd-4bdc-9dde-cb633b5d7083\") " Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.596073 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78f84e8d-9afd-4bdc-9dde-cb633b5d7083-operator-scripts\") pod \"78f84e8d-9afd-4bdc-9dde-cb633b5d7083\" (UID: \"78f84e8d-9afd-4bdc-9dde-cb633b5d7083\") " Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.596904 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78f84e8d-9afd-4bdc-9dde-cb633b5d7083-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "78f84e8d-9afd-4bdc-9dde-cb633b5d7083" (UID: "78f84e8d-9afd-4bdc-9dde-cb633b5d7083"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.606297 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78f84e8d-9afd-4bdc-9dde-cb633b5d7083-kube-api-access-gprbw" (OuterVolumeSpecName: "kube-api-access-gprbw") pod "78f84e8d-9afd-4bdc-9dde-cb633b5d7083" (UID: "78f84e8d-9afd-4bdc-9dde-cb633b5d7083"). InnerVolumeSpecName "kube-api-access-gprbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.689433 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-s67c7" Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.696520 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-scjhz" Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.697254 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78f84e8d-9afd-4bdc-9dde-cb633b5d7083-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.697331 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gprbw\" (UniqueName: \"kubernetes.io/projected/78f84e8d-9afd-4bdc-9dde-cb633b5d7083-kube-api-access-gprbw\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.710839 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0e4e-account-create-update-stpgm" Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.797928 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a52037a8-0c1c-4c36-b1ca-f90b67f83ff0-operator-scripts\") pod \"a52037a8-0c1c-4c36-b1ca-f90b67f83ff0\" (UID: \"a52037a8-0c1c-4c36-b1ca-f90b67f83ff0\") " Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.797999 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r7x6\" (UniqueName: \"kubernetes.io/projected/a52037a8-0c1c-4c36-b1ca-f90b67f83ff0-kube-api-access-6r7x6\") pod \"a52037a8-0c1c-4c36-b1ca-f90b67f83ff0\" (UID: \"a52037a8-0c1c-4c36-b1ca-f90b67f83ff0\") " Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.798041 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqn8l\" (UniqueName: \"kubernetes.io/projected/22d6b7eb-aa56-45d5-84c2-97d3732916db-kube-api-access-pqn8l\") pod \"22d6b7eb-aa56-45d5-84c2-97d3732916db\" (UID: \"22d6b7eb-aa56-45d5-84c2-97d3732916db\") " Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.798076 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09b26dd3-c5dd-447f-a2ef-6d92e5ad231a-operator-scripts\") pod \"09b26dd3-c5dd-447f-a2ef-6d92e5ad231a\" (UID: \"09b26dd3-c5dd-447f-a2ef-6d92e5ad231a\") " Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.798159 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22d6b7eb-aa56-45d5-84c2-97d3732916db-operator-scripts\") pod \"22d6b7eb-aa56-45d5-84c2-97d3732916db\" (UID: \"22d6b7eb-aa56-45d5-84c2-97d3732916db\") " Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.798180 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pll8h\" (UniqueName: \"kubernetes.io/projected/09b26dd3-c5dd-447f-a2ef-6d92e5ad231a-kube-api-access-pll8h\") pod \"09b26dd3-c5dd-447f-a2ef-6d92e5ad231a\" (UID: \"09b26dd3-c5dd-447f-a2ef-6d92e5ad231a\") " Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.798995 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a52037a8-0c1c-4c36-b1ca-f90b67f83ff0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a52037a8-0c1c-4c36-b1ca-f90b67f83ff0" (UID: "a52037a8-0c1c-4c36-b1ca-f90b67f83ff0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.799025 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09b26dd3-c5dd-447f-a2ef-6d92e5ad231a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "09b26dd3-c5dd-447f-a2ef-6d92e5ad231a" (UID: "09b26dd3-c5dd-447f-a2ef-6d92e5ad231a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.799723 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22d6b7eb-aa56-45d5-84c2-97d3732916db-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "22d6b7eb-aa56-45d5-84c2-97d3732916db" (UID: "22d6b7eb-aa56-45d5-84c2-97d3732916db"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.805210 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09b26dd3-c5dd-447f-a2ef-6d92e5ad231a-kube-api-access-pll8h" (OuterVolumeSpecName: "kube-api-access-pll8h") pod "09b26dd3-c5dd-447f-a2ef-6d92e5ad231a" (UID: "09b26dd3-c5dd-447f-a2ef-6d92e5ad231a"). InnerVolumeSpecName "kube-api-access-pll8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.806524 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22d6b7eb-aa56-45d5-84c2-97d3732916db-kube-api-access-pqn8l" (OuterVolumeSpecName: "kube-api-access-pqn8l") pod "22d6b7eb-aa56-45d5-84c2-97d3732916db" (UID: "22d6b7eb-aa56-45d5-84c2-97d3732916db"). InnerVolumeSpecName "kube-api-access-pqn8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.806800 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52037a8-0c1c-4c36-b1ca-f90b67f83ff0-kube-api-access-6r7x6" (OuterVolumeSpecName: "kube-api-access-6r7x6") pod "a52037a8-0c1c-4c36-b1ca-f90b67f83ff0" (UID: "a52037a8-0c1c-4c36-b1ca-f90b67f83ff0"). InnerVolumeSpecName "kube-api-access-6r7x6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.900730 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a52037a8-0c1c-4c36-b1ca-f90b67f83ff0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.900774 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6r7x6\" (UniqueName: \"kubernetes.io/projected/a52037a8-0c1c-4c36-b1ca-f90b67f83ff0-kube-api-access-6r7x6\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.900790 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqn8l\" (UniqueName: \"kubernetes.io/projected/22d6b7eb-aa56-45d5-84c2-97d3732916db-kube-api-access-pqn8l\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.900802 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09b26dd3-c5dd-447f-a2ef-6d92e5ad231a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.900813 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22d6b7eb-aa56-45d5-84c2-97d3732916db-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:13 crc kubenswrapper[4833]: I0219 13:04:13.900825 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pll8h\" (UniqueName: \"kubernetes.io/projected/09b26dd3-c5dd-447f-a2ef-6d92e5ad231a-kube-api-access-pll8h\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.076268 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0e4e-account-create-update-stpgm" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.076248 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0e4e-account-create-update-stpgm" event={"ID":"09b26dd3-c5dd-447f-a2ef-6d92e5ad231a","Type":"ContainerDied","Data":"3787dd1acd06efd6e21c74739b63226285d3e74c09257a3466e904b3a02eb9ca"} Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.076427 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3787dd1acd06efd6e21c74739b63226285d3e74c09257a3466e904b3a02eb9ca" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.079257 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-78fb-account-create-update-q65b4" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.079272 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78fb-account-create-update-q65b4" event={"ID":"78f84e8d-9afd-4bdc-9dde-cb633b5d7083","Type":"ContainerDied","Data":"eb5fc63e30cfab5e2028f48843e25460f46f643b3a42d3ddd90e7ed2f5314a8f"} Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.079324 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb5fc63e30cfab5e2028f48843e25460f46f643b3a42d3ddd90e7ed2f5314a8f" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.081688 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-scjhz" event={"ID":"22d6b7eb-aa56-45d5-84c2-97d3732916db","Type":"ContainerDied","Data":"30379b7f71821544b15a5b1b474262de90d96bf357364bc8baf5d05e746ea7fc"} Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.081727 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30379b7f71821544b15a5b1b474262de90d96bf357364bc8baf5d05e746ea7fc" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.081777 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-scjhz" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.083871 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-s67c7" event={"ID":"a52037a8-0c1c-4c36-b1ca-f90b67f83ff0","Type":"ContainerDied","Data":"6cc06c969f685d22ffacda62f13d3ef4e2e05bdfb2e332ff9da402540dd6348d"} Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.083894 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cc06c969f685d22ffacda62f13d3ef4e2e05bdfb2e332ff9da402540dd6348d" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.083969 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-s67c7" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.245941 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-nl4pd"] Feb 19 13:04:14 crc kubenswrapper[4833]: E0219 13:04:14.246313 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a52037a8-0c1c-4c36-b1ca-f90b67f83ff0" containerName="mariadb-database-create" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.246337 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a52037a8-0c1c-4c36-b1ca-f90b67f83ff0" containerName="mariadb-database-create" Feb 19 13:04:14 crc kubenswrapper[4833]: E0219 13:04:14.246354 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46126eda-f691-4339-966c-615190176dea" containerName="swift-ring-rebalance" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.246363 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="46126eda-f691-4339-966c-615190176dea" containerName="swift-ring-rebalance" Feb 19 13:04:14 crc kubenswrapper[4833]: E0219 13:04:14.246379 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22d6b7eb-aa56-45d5-84c2-97d3732916db" containerName="mariadb-database-create" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.246386 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="22d6b7eb-aa56-45d5-84c2-97d3732916db" containerName="mariadb-database-create" Feb 19 13:04:14 crc kubenswrapper[4833]: E0219 13:04:14.246396 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c210e9-a412-4019-b8ae-2bf67642661a" containerName="mariadb-database-create" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.246404 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c210e9-a412-4019-b8ae-2bf67642661a" containerName="mariadb-database-create" Feb 19 13:04:14 crc kubenswrapper[4833]: E0219 13:04:14.246416 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0fb7215-a577-4756-8363-a4cba291a804" containerName="mariadb-account-create-update" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.246424 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0fb7215-a577-4756-8363-a4cba291a804" containerName="mariadb-account-create-update" Feb 19 13:04:14 crc kubenswrapper[4833]: E0219 13:04:14.246435 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09b26dd3-c5dd-447f-a2ef-6d92e5ad231a" containerName="mariadb-account-create-update" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.246442 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="09b26dd3-c5dd-447f-a2ef-6d92e5ad231a" containerName="mariadb-account-create-update" Feb 19 13:04:14 crc kubenswrapper[4833]: E0219 13:04:14.246464 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78f84e8d-9afd-4bdc-9dde-cb633b5d7083" containerName="mariadb-account-create-update" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.246471 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="78f84e8d-9afd-4bdc-9dde-cb633b5d7083" containerName="mariadb-account-create-update" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.246720 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="78f84e8d-9afd-4bdc-9dde-cb633b5d7083" containerName="mariadb-account-create-update" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.246740 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="a52037a8-0c1c-4c36-b1ca-f90b67f83ff0" 
containerName="mariadb-database-create" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.246750 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="22d6b7eb-aa56-45d5-84c2-97d3732916db" containerName="mariadb-database-create" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.246768 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0fb7215-a577-4756-8363-a4cba291a804" containerName="mariadb-account-create-update" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.246782 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="46126eda-f691-4339-966c-615190176dea" containerName="swift-ring-rebalance" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.246794 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="07c210e9-a412-4019-b8ae-2bf67642661a" containerName="mariadb-database-create" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.246805 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="09b26dd3-c5dd-447f-a2ef-6d92e5ad231a" containerName="mariadb-account-create-update" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.247443 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-nl4pd" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.250448 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-6dfnb" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.254245 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.275409 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-nl4pd"] Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.310020 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8ab56183-f7ec-44a3-95db-66064f67a074-db-sync-config-data\") pod \"glance-db-sync-nl4pd\" (UID: \"8ab56183-f7ec-44a3-95db-66064f67a074\") " pod="openstack/glance-db-sync-nl4pd" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.310162 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ab56183-f7ec-44a3-95db-66064f67a074-config-data\") pod \"glance-db-sync-nl4pd\" (UID: \"8ab56183-f7ec-44a3-95db-66064f67a074\") " pod="openstack/glance-db-sync-nl4pd" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.310350 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6z5v\" (UniqueName: \"kubernetes.io/projected/8ab56183-f7ec-44a3-95db-66064f67a074-kube-api-access-z6z5v\") pod \"glance-db-sync-nl4pd\" (UID: \"8ab56183-f7ec-44a3-95db-66064f67a074\") " pod="openstack/glance-db-sync-nl4pd" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.310385 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab56183-f7ec-44a3-95db-66064f67a074-combined-ca-bundle\") pod \"glance-db-sync-nl4pd\" (UID: \"8ab56183-f7ec-44a3-95db-66064f67a074\") " pod="openstack/glance-db-sync-nl4pd" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.323609 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="664b0c3e-bb34-4357-a07b-418b3b890ec3" 
path="/var/lib/kubelet/pods/664b0c3e-bb34-4357-a07b-418b3b890ec3/volumes" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.412041 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ab56183-f7ec-44a3-95db-66064f67a074-config-data\") pod \"glance-db-sync-nl4pd\" (UID: \"8ab56183-f7ec-44a3-95db-66064f67a074\") " pod="openstack/glance-db-sync-nl4pd" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.412154 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6z5v\" (UniqueName: \"kubernetes.io/projected/8ab56183-f7ec-44a3-95db-66064f67a074-kube-api-access-z6z5v\") pod \"glance-db-sync-nl4pd\" (UID: \"8ab56183-f7ec-44a3-95db-66064f67a074\") " pod="openstack/glance-db-sync-nl4pd" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.412180 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab56183-f7ec-44a3-95db-66064f67a074-combined-ca-bundle\") pod \"glance-db-sync-nl4pd\" (UID: \"8ab56183-f7ec-44a3-95db-66064f67a074\") " pod="openstack/glance-db-sync-nl4pd" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.412268 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8ab56183-f7ec-44a3-95db-66064f67a074-db-sync-config-data\") pod \"glance-db-sync-nl4pd\" (UID: \"8ab56183-f7ec-44a3-95db-66064f67a074\") " pod="openstack/glance-db-sync-nl4pd" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.416323 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ab56183-f7ec-44a3-95db-66064f67a074-config-data\") pod \"glance-db-sync-nl4pd\" (UID: \"8ab56183-f7ec-44a3-95db-66064f67a074\") " pod="openstack/glance-db-sync-nl4pd" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.416449 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab56183-f7ec-44a3-95db-66064f67a074-combined-ca-bundle\") pod \"glance-db-sync-nl4pd\" (UID: \"8ab56183-f7ec-44a3-95db-66064f67a074\") " pod="openstack/glance-db-sync-nl4pd" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.417047 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8ab56183-f7ec-44a3-95db-66064f67a074-db-sync-config-data\") pod \"glance-db-sync-nl4pd\" (UID: \"8ab56183-f7ec-44a3-95db-66064f67a074\") " pod="openstack/glance-db-sync-nl4pd" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.439559 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6z5v\" (UniqueName: \"kubernetes.io/projected/8ab56183-f7ec-44a3-95db-66064f67a074-kube-api-access-z6z5v\") pod \"glance-db-sync-nl4pd\" (UID: \"8ab56183-f7ec-44a3-95db-66064f67a074\") " pod="openstack/glance-db-sync-nl4pd" Feb 19 13:04:14 crc kubenswrapper[4833]: I0219 13:04:14.577286 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-nl4pd" Feb 19 13:04:15 crc kubenswrapper[4833]: I0219 13:04:15.093483 4833 generic.go:334] "Generic (PLEG): container finished" podID="9c07579b-ab54-4267-83d6-1d6c0404ba3e" containerID="dba3d1413758072442d9ddfac05eb89afbc310bb7af8b791a2712c2c48b11986" exitCode=0 Feb 19 13:04:15 crc kubenswrapper[4833]: I0219 13:04:15.093578 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9c07579b-ab54-4267-83d6-1d6c0404ba3e","Type":"ContainerDied","Data":"dba3d1413758072442d9ddfac05eb89afbc310bb7af8b791a2712c2c48b11986"} Feb 19 13:04:15 crc kubenswrapper[4833]: I0219 13:04:15.282570 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-nl4pd"] Feb 19 13:04:16 crc kubenswrapper[4833]: I0219 13:04:16.106379 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nl4pd" event={"ID":"8ab56183-f7ec-44a3-95db-66064f67a074","Type":"ContainerStarted","Data":"ad78f43e82da394ad4b115bd4f554604c500530adec1b729dc42400390d4ef25"} Feb 19 13:04:16 crc kubenswrapper[4833]: I0219 13:04:16.110440 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9c07579b-ab54-4267-83d6-1d6c0404ba3e","Type":"ContainerStarted","Data":"24555c0362540584930eb2b0f67b08f0230f630668eeb48fbcb6a570f233da57"} Feb 19 13:04:16 crc kubenswrapper[4833]: I0219 13:04:16.111606 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 19 13:04:16 crc kubenswrapper[4833]: I0219 13:04:16.147415 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.523667188 podStartE2EDuration="1m2.147397006s" podCreationTimestamp="2026-02-19 13:03:14 +0000 UTC" firstStartedPulling="2026-02-19 13:03:16.401937912 +0000 UTC m=+1006.797456680" lastFinishedPulling="2026-02-19 13:03:41.02566773 +0000 UTC m=+1031.421186498" observedRunningTime="2026-02-19 13:04:16.140667054 +0000 UTC m=+1066.536185842" watchObservedRunningTime="2026-02-19 13:04:16.147397006 +0000 UTC m=+1066.542915774" Feb 19 13:04:17 crc kubenswrapper[4833]: I0219 13:04:17.382084 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-hhm6g"] Feb 19 13:04:17 crc kubenswrapper[4833]: I0219 13:04:17.383847 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hhm6g" Feb 19 13:04:17 crc kubenswrapper[4833]: I0219 13:04:17.394182 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hhm6g"] Feb 19 13:04:17 crc kubenswrapper[4833]: I0219 13:04:17.396980 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 19 13:04:17 crc kubenswrapper[4833]: I0219 13:04:17.569911 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8906713d-1f2f-46ef-9895-b4a2cc5fddff-operator-scripts\") pod \"root-account-create-update-hhm6g\" (UID: \"8906713d-1f2f-46ef-9895-b4a2cc5fddff\") " pod="openstack/root-account-create-update-hhm6g" Feb 19 13:04:17 crc kubenswrapper[4833]: I0219 13:04:17.570320 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gbr8\" (UniqueName: \"kubernetes.io/projected/8906713d-1f2f-46ef-9895-b4a2cc5fddff-kube-api-access-4gbr8\") pod \"root-account-create-update-hhm6g\" (UID: \"8906713d-1f2f-46ef-9895-b4a2cc5fddff\") " pod="openstack/root-account-create-update-hhm6g" Feb 19 13:04:17 crc kubenswrapper[4833]: I0219 13:04:17.672280 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8906713d-1f2f-46ef-9895-b4a2cc5fddff-operator-scripts\") pod \"root-account-create-update-hhm6g\" (UID: \"8906713d-1f2f-46ef-9895-b4a2cc5fddff\") " pod="openstack/root-account-create-update-hhm6g" Feb 19 13:04:17 crc kubenswrapper[4833]: I0219 13:04:17.672351 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gbr8\" (UniqueName: \"kubernetes.io/projected/8906713d-1f2f-46ef-9895-b4a2cc5fddff-kube-api-access-4gbr8\") pod \"root-account-create-update-hhm6g\" (UID: \"8906713d-1f2f-46ef-9895-b4a2cc5fddff\") " pod="openstack/root-account-create-update-hhm6g" Feb 19 13:04:17 crc kubenswrapper[4833]: I0219 13:04:17.673383 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8906713d-1f2f-46ef-9895-b4a2cc5fddff-operator-scripts\") pod \"root-account-create-update-hhm6g\" (UID: \"8906713d-1f2f-46ef-9895-b4a2cc5fddff\") " pod="openstack/root-account-create-update-hhm6g" Feb 19 13:04:17 crc kubenswrapper[4833]: I0219 13:04:17.694080 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gbr8\" (UniqueName: \"kubernetes.io/projected/8906713d-1f2f-46ef-9895-b4a2cc5fddff-kube-api-access-4gbr8\") pod \"root-account-create-update-hhm6g\" (UID: \"8906713d-1f2f-46ef-9895-b4a2cc5fddff\") " pod="openstack/root-account-create-update-hhm6g" Feb 19 13:04:17 crc kubenswrapper[4833]: I0219 13:04:17.709801 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hhm6g" Feb 19 13:04:17 crc kubenswrapper[4833]: I0219 13:04:17.972618 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hhm6g"] Feb 19 13:04:18 crc kubenswrapper[4833]: I0219 13:04:18.129592 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hhm6g" event={"ID":"8906713d-1f2f-46ef-9895-b4a2cc5fddff","Type":"ContainerStarted","Data":"f3d827c23b196678da8d9f2ff05458a9fbebe122f18aa518f8e7d4ff0a0f0726"} Feb 19 13:04:19 crc kubenswrapper[4833]: I0219 13:04:19.140925 4833 generic.go:334] "Generic (PLEG): container finished" podID="8906713d-1f2f-46ef-9895-b4a2cc5fddff" containerID="a2d5cefe73d91412b5c95c3a584bfca4b5de6e1fa70c2cfa27c36895c12203b2" exitCode=0 Feb 19 13:04:19 crc kubenswrapper[4833]: I0219 13:04:19.140964 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hhm6g" event={"ID":"8906713d-1f2f-46ef-9895-b4a2cc5fddff","Type":"ContainerDied","Data":"a2d5cefe73d91412b5c95c3a584bfca4b5de6e1fa70c2cfa27c36895c12203b2"} Feb 19 13:04:19 crc kubenswrapper[4833]: I0219 13:04:19.382754 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-fbrgv" podUID="488bba31-e718-4ef1-bd04-6ed3fe165c89" containerName="ovn-controller" probeResult="failure" output=< Feb 19 13:04:19 crc kubenswrapper[4833]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 19 13:04:19 crc kubenswrapper[4833]: > Feb 19 13:04:19 crc kubenswrapper[4833]: I0219 13:04:19.403445 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-9jlg7" Feb 19 13:04:19 crc kubenswrapper[4833]: I0219 13:04:19.423129 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-9jlg7" Feb 19 13:04:19 crc kubenswrapper[4833]: I0219 13:04:19.636710 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-fbrgv-config-6kxq4"] Feb 19 13:04:19 crc kubenswrapper[4833]: I0219 13:04:19.637996 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-fbrgv-config-6kxq4" Feb 19 13:04:19 crc kubenswrapper[4833]: I0219 13:04:19.643200 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 19 13:04:19 crc kubenswrapper[4833]: I0219 13:04:19.654579 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fbrgv-config-6kxq4"] Feb 19 13:04:19 crc kubenswrapper[4833]: I0219 13:04:19.805636 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99105995-a7f9-4667-97a1-b28259252d16-scripts\") pod \"ovn-controller-fbrgv-config-6kxq4\" (UID: \"99105995-a7f9-4667-97a1-b28259252d16\") " pod="openstack/ovn-controller-fbrgv-config-6kxq4" Feb 19 13:04:19 crc kubenswrapper[4833]: I0219 13:04:19.805685 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/99105995-a7f9-4667-97a1-b28259252d16-var-run-ovn\") pod \"ovn-controller-fbrgv-config-6kxq4\" (UID: \"99105995-a7f9-4667-97a1-b28259252d16\") " pod="openstack/ovn-controller-fbrgv-config-6kxq4" Feb 19 13:04:19 crc kubenswrapper[4833]: I0219 13:04:19.805710 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnjvd\" (UniqueName: \"kubernetes.io/projected/99105995-a7f9-4667-97a1-b28259252d16-kube-api-access-lnjvd\") pod \"ovn-controller-fbrgv-config-6kxq4\" (UID: \"99105995-a7f9-4667-97a1-b28259252d16\") " pod="openstack/ovn-controller-fbrgv-config-6kxq4" Feb 19 13:04:19 crc kubenswrapper[4833]: I0219 13:04:19.805750 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/99105995-a7f9-4667-97a1-b28259252d16-var-run\") pod \"ovn-controller-fbrgv-config-6kxq4\" (UID: \"99105995-a7f9-4667-97a1-b28259252d16\") " pod="openstack/ovn-controller-fbrgv-config-6kxq4" Feb 19 13:04:19 crc kubenswrapper[4833]: I0219 13:04:19.805779 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/99105995-a7f9-4667-97a1-b28259252d16-additional-scripts\") pod \"ovn-controller-fbrgv-config-6kxq4\" (UID: \"99105995-a7f9-4667-97a1-b28259252d16\") " pod="openstack/ovn-controller-fbrgv-config-6kxq4" Feb 19 13:04:19 crc kubenswrapper[4833]: I0219 13:04:19.805841 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/99105995-a7f9-4667-97a1-b28259252d16-var-log-ovn\") pod \"ovn-controller-fbrgv-config-6kxq4\" (UID: \"99105995-a7f9-4667-97a1-b28259252d16\") " pod="openstack/ovn-controller-fbrgv-config-6kxq4" Feb 19 13:04:19 crc kubenswrapper[4833]: I0219 13:04:19.907448 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/99105995-a7f9-4667-97a1-b28259252d16-var-log-ovn\") pod \"ovn-controller-fbrgv-config-6kxq4\" (UID: \"99105995-a7f9-4667-97a1-b28259252d16\") " pod="openstack/ovn-controller-fbrgv-config-6kxq4" Feb 19 13:04:19 crc kubenswrapper[4833]: I0219 13:04:19.907841 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/99105995-a7f9-4667-97a1-b28259252d16-var-log-ovn\") pod 
\"ovn-controller-fbrgv-config-6kxq4\" (UID: \"99105995-a7f9-4667-97a1-b28259252d16\") " pod="openstack/ovn-controller-fbrgv-config-6kxq4" Feb 19 13:04:19 crc kubenswrapper[4833]: I0219 13:04:19.910475 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99105995-a7f9-4667-97a1-b28259252d16-scripts\") pod \"ovn-controller-fbrgv-config-6kxq4\" (UID: \"99105995-a7f9-4667-97a1-b28259252d16\") " pod="openstack/ovn-controller-fbrgv-config-6kxq4" Feb 19 13:04:19 crc kubenswrapper[4833]: I0219 13:04:19.910653 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99105995-a7f9-4667-97a1-b28259252d16-scripts\") pod \"ovn-controller-fbrgv-config-6kxq4\" (UID: \"99105995-a7f9-4667-97a1-b28259252d16\") " pod="openstack/ovn-controller-fbrgv-config-6kxq4" Feb 19 13:04:19 crc kubenswrapper[4833]: I0219 13:04:19.910776 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/99105995-a7f9-4667-97a1-b28259252d16-var-run-ovn\") pod \"ovn-controller-fbrgv-config-6kxq4\" (UID: \"99105995-a7f9-4667-97a1-b28259252d16\") " pod="openstack/ovn-controller-fbrgv-config-6kxq4" Feb 19 13:04:19 crc kubenswrapper[4833]: I0219 13:04:19.910829 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnjvd\" (UniqueName: \"kubernetes.io/projected/99105995-a7f9-4667-97a1-b28259252d16-kube-api-access-lnjvd\") pod \"ovn-controller-fbrgv-config-6kxq4\" (UID: \"99105995-a7f9-4667-97a1-b28259252d16\") " pod="openstack/ovn-controller-fbrgv-config-6kxq4" Feb 19 13:04:19 crc kubenswrapper[4833]: I0219 13:04:19.910907 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/99105995-a7f9-4667-97a1-b28259252d16-var-run\") pod \"ovn-controller-fbrgv-config-6kxq4\" (UID: \"99105995-a7f9-4667-97a1-b28259252d16\") " pod="openstack/ovn-controller-fbrgv-config-6kxq4" Feb 19 13:04:19 crc kubenswrapper[4833]: I0219 13:04:19.910965 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/99105995-a7f9-4667-97a1-b28259252d16-additional-scripts\") pod \"ovn-controller-fbrgv-config-6kxq4\" (UID: \"99105995-a7f9-4667-97a1-b28259252d16\") " pod="openstack/ovn-controller-fbrgv-config-6kxq4" Feb 19 13:04:19 crc kubenswrapper[4833]: I0219 13:04:19.910961 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/99105995-a7f9-4667-97a1-b28259252d16-var-run-ovn\") pod \"ovn-controller-fbrgv-config-6kxq4\" (UID: \"99105995-a7f9-4667-97a1-b28259252d16\") " pod="openstack/ovn-controller-fbrgv-config-6kxq4" Feb 19 13:04:19 crc kubenswrapper[4833]: I0219 13:04:19.911042 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/99105995-a7f9-4667-97a1-b28259252d16-var-run\") pod \"ovn-controller-fbrgv-config-6kxq4\" (UID: \"99105995-a7f9-4667-97a1-b28259252d16\") " pod="openstack/ovn-controller-fbrgv-config-6kxq4" Feb 19 13:04:19 crc kubenswrapper[4833]: I0219 13:04:19.911646 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/99105995-a7f9-4667-97a1-b28259252d16-additional-scripts\") pod 
\"ovn-controller-fbrgv-config-6kxq4\" (UID: \"99105995-a7f9-4667-97a1-b28259252d16\") " pod="openstack/ovn-controller-fbrgv-config-6kxq4" Feb 19 13:04:19 crc kubenswrapper[4833]: I0219 13:04:19.929405 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnjvd\" (UniqueName: \"kubernetes.io/projected/99105995-a7f9-4667-97a1-b28259252d16-kube-api-access-lnjvd\") pod \"ovn-controller-fbrgv-config-6kxq4\" (UID: \"99105995-a7f9-4667-97a1-b28259252d16\") " pod="openstack/ovn-controller-fbrgv-config-6kxq4" Feb 19 13:04:19 crc kubenswrapper[4833]: I0219 13:04:19.954145 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fbrgv-config-6kxq4" Feb 19 13:04:20 crc kubenswrapper[4833]: I0219 13:04:20.423747 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-fbrgv-config-6kxq4"] Feb 19 13:04:20 crc kubenswrapper[4833]: I0219 13:04:20.496100 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hhm6g" Feb 19 13:04:20 crc kubenswrapper[4833]: I0219 13:04:20.629866 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gbr8\" (UniqueName: \"kubernetes.io/projected/8906713d-1f2f-46ef-9895-b4a2cc5fddff-kube-api-access-4gbr8\") pod \"8906713d-1f2f-46ef-9895-b4a2cc5fddff\" (UID: \"8906713d-1f2f-46ef-9895-b4a2cc5fddff\") " Feb 19 13:04:20 crc kubenswrapper[4833]: I0219 13:04:20.630134 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8906713d-1f2f-46ef-9895-b4a2cc5fddff-operator-scripts\") pod \"8906713d-1f2f-46ef-9895-b4a2cc5fddff\" (UID: \"8906713d-1f2f-46ef-9895-b4a2cc5fddff\") " Feb 19 13:04:20 crc kubenswrapper[4833]: I0219 13:04:20.631068 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8906713d-1f2f-46ef-9895-b4a2cc5fddff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8906713d-1f2f-46ef-9895-b4a2cc5fddff" (UID: "8906713d-1f2f-46ef-9895-b4a2cc5fddff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:20 crc kubenswrapper[4833]: I0219 13:04:20.640701 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8906713d-1f2f-46ef-9895-b4a2cc5fddff-kube-api-access-4gbr8" (OuterVolumeSpecName: "kube-api-access-4gbr8") pod "8906713d-1f2f-46ef-9895-b4a2cc5fddff" (UID: "8906713d-1f2f-46ef-9895-b4a2cc5fddff"). InnerVolumeSpecName "kube-api-access-4gbr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:04:20 crc kubenswrapper[4833]: I0219 13:04:20.732193 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gbr8\" (UniqueName: \"kubernetes.io/projected/8906713d-1f2f-46ef-9895-b4a2cc5fddff-kube-api-access-4gbr8\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:20 crc kubenswrapper[4833]: I0219 13:04:20.732454 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8906713d-1f2f-46ef-9895-b4a2cc5fddff-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:21 crc kubenswrapper[4833]: I0219 13:04:21.157807 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-hhm6g" Feb 19 13:04:21 crc kubenswrapper[4833]: I0219 13:04:21.157807 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hhm6g" event={"ID":"8906713d-1f2f-46ef-9895-b4a2cc5fddff","Type":"ContainerDied","Data":"f3d827c23b196678da8d9f2ff05458a9fbebe122f18aa518f8e7d4ff0a0f0726"} Feb 19 13:04:21 crc kubenswrapper[4833]: I0219 13:04:21.157959 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3d827c23b196678da8d9f2ff05458a9fbebe122f18aa518f8e7d4ff0a0f0726" Feb 19 13:04:21 crc kubenswrapper[4833]: I0219 13:04:21.159467 4833 generic.go:334] "Generic (PLEG): container finished" podID="99105995-a7f9-4667-97a1-b28259252d16" containerID="574072301f5e178d53718c251aae57b00a945a04d898c95c03429952b49d93c2" exitCode=0 Feb 19 13:04:21 crc kubenswrapper[4833]: I0219 13:04:21.159525 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fbrgv-config-6kxq4" event={"ID":"99105995-a7f9-4667-97a1-b28259252d16","Type":"ContainerDied","Data":"574072301f5e178d53718c251aae57b00a945a04d898c95c03429952b49d93c2"} Feb 19 13:04:21 crc kubenswrapper[4833]: I0219 13:04:21.159565 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fbrgv-config-6kxq4" event={"ID":"99105995-a7f9-4667-97a1-b28259252d16","Type":"ContainerStarted","Data":"fab6e0326ec09deeba09213d328775717168fe1e6b67e53e6bca6aee1b9149b8"} Feb 19 13:04:24 crc kubenswrapper[4833]: I0219 13:04:24.100529 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0dfc7a49-4c64-4c4c-b0a9-eea1d8734612-etc-swift\") pod \"swift-storage-0\" (UID: \"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612\") " pod="openstack/swift-storage-0" Feb 19 13:04:24 crc kubenswrapper[4833]: I0219 13:04:24.106799 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0dfc7a49-4c64-4c4c-b0a9-eea1d8734612-etc-swift\") pod \"swift-storage-0\" (UID: \"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612\") " pod="openstack/swift-storage-0" Feb 19 13:04:24 crc kubenswrapper[4833]: I0219 13:04:24.332561 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 19 13:04:24 crc kubenswrapper[4833]: I0219 13:04:24.382518 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-fbrgv" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.091787 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.436773 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-ql56p"] Feb 19 13:04:26 crc kubenswrapper[4833]: E0219 13:04:26.437519 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8906713d-1f2f-46ef-9895-b4a2cc5fddff" containerName="mariadb-account-create-update" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.437539 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="8906713d-1f2f-46ef-9895-b4a2cc5fddff" containerName="mariadb-account-create-update" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.437717 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="8906713d-1f2f-46ef-9895-b4a2cc5fddff" containerName="mariadb-account-create-update" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.438333 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ql56p" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.477578 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-ql56p"] Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.544440 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d908d190-3ee3-4403-90d3-8635493b7b6c-operator-scripts\") pod \"cinder-db-create-ql56p\" (UID: \"d908d190-3ee3-4403-90d3-8635493b7b6c\") " pod="openstack/cinder-db-create-ql56p" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.544579 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rjdb\" (UniqueName: \"kubernetes.io/projected/d908d190-3ee3-4403-90d3-8635493b7b6c-kube-api-access-2rjdb\") pod \"cinder-db-create-ql56p\" (UID: \"d908d190-3ee3-4403-90d3-8635493b7b6c\") " pod="openstack/cinder-db-create-ql56p" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.637541 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-c4b6-account-create-update-58cbw"] Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.638512 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c4b6-account-create-update-58cbw" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.640923 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.646025 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d908d190-3ee3-4403-90d3-8635493b7b6c-operator-scripts\") pod \"cinder-db-create-ql56p\" (UID: \"d908d190-3ee3-4403-90d3-8635493b7b6c\") " pod="openstack/cinder-db-create-ql56p" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.646099 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rjdb\" (UniqueName: \"kubernetes.io/projected/d908d190-3ee3-4403-90d3-8635493b7b6c-kube-api-access-2rjdb\") pod \"cinder-db-create-ql56p\" (UID: \"d908d190-3ee3-4403-90d3-8635493b7b6c\") " pod="openstack/cinder-db-create-ql56p" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.646740 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d908d190-3ee3-4403-90d3-8635493b7b6c-operator-scripts\") pod \"cinder-db-create-ql56p\" (UID: \"d908d190-3ee3-4403-90d3-8635493b7b6c\") " pod="openstack/cinder-db-create-ql56p" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.651703 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c4b6-account-create-update-58cbw"] Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.709321 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rjdb\" (UniqueName: \"kubernetes.io/projected/d908d190-3ee3-4403-90d3-8635493b7b6c-kube-api-access-2rjdb\") pod \"cinder-db-create-ql56p\" (UID: \"d908d190-3ee3-4403-90d3-8635493b7b6c\") " pod="openstack/cinder-db-create-ql56p" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.727260 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-85x95"] Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.728271 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-85x95" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.745683 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-85x95"] Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.749297 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfrxk\" (UniqueName: \"kubernetes.io/projected/2074acd6-4d68-4d93-84af-a608758fddd0-kube-api-access-dfrxk\") pod \"cinder-c4b6-account-create-update-58cbw\" (UID: \"2074acd6-4d68-4d93-84af-a608758fddd0\") " pod="openstack/cinder-c4b6-account-create-update-58cbw" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.749365 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2074acd6-4d68-4d93-84af-a608758fddd0-operator-scripts\") pod \"cinder-c4b6-account-create-update-58cbw\" (UID: \"2074acd6-4d68-4d93-84af-a608758fddd0\") " pod="openstack/cinder-c4b6-account-create-update-58cbw" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.759944 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-ql56p" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.809572 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-lgbvr"] Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.810920 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lgbvr" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.818026 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-h5szg"] Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.819069 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-h5szg" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.823798 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.823841 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lgbvr"] Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.824091 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.824301 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-lhmgj" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.824469 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.837578 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-h5szg"] Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.853828 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2074acd6-4d68-4d93-84af-a608758fddd0-operator-scripts\") pod \"cinder-c4b6-account-create-update-58cbw\" (UID: \"2074acd6-4d68-4d93-84af-a608758fddd0\") " pod="openstack/cinder-c4b6-account-create-update-58cbw" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.853905 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfj6n\" (UniqueName: \"kubernetes.io/projected/c2424429-364d-4bce-b9da-32ea56eae279-kube-api-access-wfj6n\") pod \"barbican-db-create-85x95\" (UID: \"c2424429-364d-4bce-b9da-32ea56eae279\") " pod="openstack/barbican-db-create-85x95" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.853936 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2424429-364d-4bce-b9da-32ea56eae279-operator-scripts\") pod \"barbican-db-create-85x95\" (UID: \"c2424429-364d-4bce-b9da-32ea56eae279\") " pod="openstack/barbican-db-create-85x95" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.854014 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfrxk\" (UniqueName: \"kubernetes.io/projected/2074acd6-4d68-4d93-84af-a608758fddd0-kube-api-access-dfrxk\") pod \"cinder-c4b6-account-create-update-58cbw\" (UID: \"2074acd6-4d68-4d93-84af-a608758fddd0\") " pod="openstack/cinder-c4b6-account-create-update-58cbw" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.863067 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2074acd6-4d68-4d93-84af-a608758fddd0-operator-scripts\") pod \"cinder-c4b6-account-create-update-58cbw\" (UID: \"2074acd6-4d68-4d93-84af-a608758fddd0\") " pod="openstack/cinder-c4b6-account-create-update-58cbw" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.871413 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfrxk\" (UniqueName: \"kubernetes.io/projected/2074acd6-4d68-4d93-84af-a608758fddd0-kube-api-access-dfrxk\") pod \"cinder-c4b6-account-create-update-58cbw\" (UID: \"2074acd6-4d68-4d93-84af-a608758fddd0\") " pod="openstack/cinder-c4b6-account-create-update-58cbw" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.928670 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-b003-account-create-update-f27jp"] Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.929895 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b003-account-create-update-f27jp" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.934586 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.940488 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b003-account-create-update-f27jp"] Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.957466 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c4b6-account-create-update-58cbw" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.958775 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2l84\" (UniqueName: \"kubernetes.io/projected/c4a33236-11e5-4757-a143-f57fd4f5a5f4-kube-api-access-f2l84\") pod \"neutron-db-create-lgbvr\" (UID: \"c4a33236-11e5-4757-a143-f57fd4f5a5f4\") " pod="openstack/neutron-db-create-lgbvr" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.959013 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4a33236-11e5-4757-a143-f57fd4f5a5f4-operator-scripts\") pod \"neutron-db-create-lgbvr\" (UID: \"c4a33236-11e5-4757-a143-f57fd4f5a5f4\") " pod="openstack/neutron-db-create-lgbvr" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.959179 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qls5b\" (UniqueName: \"kubernetes.io/projected/ddecd476-ca49-4043-a064-b769163f4988-kube-api-access-qls5b\") pod \"keystone-db-sync-h5szg\" (UID: \"ddecd476-ca49-4043-a064-b769163f4988\") " pod="openstack/keystone-db-sync-h5szg" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.959302 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddecd476-ca49-4043-a064-b769163f4988-config-data\") pod \"keystone-db-sync-h5szg\" (UID: \"ddecd476-ca49-4043-a064-b769163f4988\") " pod="openstack/keystone-db-sync-h5szg" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.959907 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddecd476-ca49-4043-a064-b769163f4988-combined-ca-bundle\") pod \"keystone-db-sync-h5szg\" (UID: \"ddecd476-ca49-4043-a064-b769163f4988\") " 
pod="openstack/keystone-db-sync-h5szg" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.962554 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfj6n\" (UniqueName: \"kubernetes.io/projected/c2424429-364d-4bce-b9da-32ea56eae279-kube-api-access-wfj6n\") pod \"barbican-db-create-85x95\" (UID: \"c2424429-364d-4bce-b9da-32ea56eae279\") " pod="openstack/barbican-db-create-85x95" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.965779 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2424429-364d-4bce-b9da-32ea56eae279-operator-scripts\") pod \"barbican-db-create-85x95\" (UID: \"c2424429-364d-4bce-b9da-32ea56eae279\") " pod="openstack/barbican-db-create-85x95" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.966747 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2424429-364d-4bce-b9da-32ea56eae279-operator-scripts\") pod \"barbican-db-create-85x95\" (UID: \"c2424429-364d-4bce-b9da-32ea56eae279\") " pod="openstack/barbican-db-create-85x95" Feb 19 13:04:26 crc kubenswrapper[4833]: I0219 13:04:26.979716 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfj6n\" (UniqueName: \"kubernetes.io/projected/c2424429-364d-4bce-b9da-32ea56eae279-kube-api-access-wfj6n\") pod \"barbican-db-create-85x95\" (UID: \"c2424429-364d-4bce-b9da-32ea56eae279\") " pod="openstack/barbican-db-create-85x95" Feb 19 13:04:27 crc kubenswrapper[4833]: I0219 13:04:27.013261 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-3cd9-account-create-update-6dwwd"] Feb 19 13:04:27 crc kubenswrapper[4833]: I0219 13:04:27.015114 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3cd9-account-create-update-6dwwd" Feb 19 13:04:27 crc kubenswrapper[4833]: I0219 13:04:27.017887 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 19 13:04:27 crc kubenswrapper[4833]: I0219 13:04:27.031066 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3cd9-account-create-update-6dwwd"] Feb 19 13:04:27 crc kubenswrapper[4833]: I0219 13:04:27.046232 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-85x95" Feb 19 13:04:27 crc kubenswrapper[4833]: I0219 13:04:27.068195 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2l84\" (UniqueName: \"kubernetes.io/projected/c4a33236-11e5-4757-a143-f57fd4f5a5f4-kube-api-access-f2l84\") pod \"neutron-db-create-lgbvr\" (UID: \"c4a33236-11e5-4757-a143-f57fd4f5a5f4\") " pod="openstack/neutron-db-create-lgbvr" Feb 19 13:04:27 crc kubenswrapper[4833]: I0219 13:04:27.068261 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h55vb\" (UniqueName: \"kubernetes.io/projected/9a5eae49-48c5-4eb3-869c-1af4cea5877d-kube-api-access-h55vb\") pod \"neutron-b003-account-create-update-f27jp\" (UID: \"9a5eae49-48c5-4eb3-869c-1af4cea5877d\") " pod="openstack/neutron-b003-account-create-update-f27jp" Feb 19 13:04:27 crc kubenswrapper[4833]: I0219 13:04:27.068289 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4a33236-11e5-4757-a143-f57fd4f5a5f4-operator-scripts\") pod \"neutron-db-create-lgbvr\" (UID: \"c4a33236-11e5-4757-a143-f57fd4f5a5f4\") " pod="openstack/neutron-db-create-lgbvr" Feb 19 13:04:27 crc kubenswrapper[4833]: I0219 13:04:27.068310 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qls5b\" (UniqueName: \"kubernetes.io/projected/ddecd476-ca49-4043-a064-b769163f4988-kube-api-access-qls5b\") pod \"keystone-db-sync-h5szg\" (UID: \"ddecd476-ca49-4043-a064-b769163f4988\") " pod="openstack/keystone-db-sync-h5szg" Feb 19 13:04:27 crc kubenswrapper[4833]: I0219 13:04:27.068328 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddecd476-ca49-4043-a064-b769163f4988-config-data\") pod \"keystone-db-sync-h5szg\" (UID: \"ddecd476-ca49-4043-a064-b769163f4988\") " pod="openstack/keystone-db-sync-h5szg" Feb 19 13:04:27 crc kubenswrapper[4833]: I0219 13:04:27.068367 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a5eae49-48c5-4eb3-869c-1af4cea5877d-operator-scripts\") pod \"neutron-b003-account-create-update-f27jp\" (UID: \"9a5eae49-48c5-4eb3-869c-1af4cea5877d\") " pod="openstack/neutron-b003-account-create-update-f27jp" Feb 19 13:04:27 crc kubenswrapper[4833]: I0219 13:04:27.068420 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddecd476-ca49-4043-a064-b769163f4988-combined-ca-bundle\") pod \"keystone-db-sync-h5szg\" (UID: \"ddecd476-ca49-4043-a064-b769163f4988\") " pod="openstack/keystone-db-sync-h5szg" Feb 19 13:04:27 crc kubenswrapper[4833]: I0219 13:04:27.070247 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4a33236-11e5-4757-a143-f57fd4f5a5f4-operator-scripts\") pod \"neutron-db-create-lgbvr\" (UID: \"c4a33236-11e5-4757-a143-f57fd4f5a5f4\") " pod="openstack/neutron-db-create-lgbvr" Feb 19 13:04:27 crc kubenswrapper[4833]: I0219 13:04:27.073608 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddecd476-ca49-4043-a064-b769163f4988-combined-ca-bundle\") pod \"keystone-db-sync-h5szg\" (UID: 
\"ddecd476-ca49-4043-a064-b769163f4988\") " pod="openstack/keystone-db-sync-h5szg" Feb 19 13:04:27 crc kubenswrapper[4833]: I0219 13:04:27.082823 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddecd476-ca49-4043-a064-b769163f4988-config-data\") pod \"keystone-db-sync-h5szg\" (UID: \"ddecd476-ca49-4043-a064-b769163f4988\") " pod="openstack/keystone-db-sync-h5szg" Feb 19 13:04:27 crc kubenswrapper[4833]: I0219 13:04:27.083319 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2l84\" (UniqueName: \"kubernetes.io/projected/c4a33236-11e5-4757-a143-f57fd4f5a5f4-kube-api-access-f2l84\") pod \"neutron-db-create-lgbvr\" (UID: \"c4a33236-11e5-4757-a143-f57fd4f5a5f4\") " pod="openstack/neutron-db-create-lgbvr" Feb 19 13:04:27 crc kubenswrapper[4833]: I0219 13:04:27.087149 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qls5b\" (UniqueName: \"kubernetes.io/projected/ddecd476-ca49-4043-a064-b769163f4988-kube-api-access-qls5b\") pod \"keystone-db-sync-h5szg\" (UID: \"ddecd476-ca49-4043-a064-b769163f4988\") " pod="openstack/keystone-db-sync-h5szg" Feb 19 13:04:27 crc kubenswrapper[4833]: I0219 13:04:27.137042 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lgbvr" Feb 19 13:04:27 crc kubenswrapper[4833]: I0219 13:04:27.145869 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-h5szg" Feb 19 13:04:27 crc kubenswrapper[4833]: I0219 13:04:27.170226 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a5eae49-48c5-4eb3-869c-1af4cea5877d-operator-scripts\") pod \"neutron-b003-account-create-update-f27jp\" (UID: \"9a5eae49-48c5-4eb3-869c-1af4cea5877d\") " pod="openstack/neutron-b003-account-create-update-f27jp" Feb 19 13:04:27 crc kubenswrapper[4833]: I0219 13:04:27.170347 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3686d9a5-edf5-4ab9-9c9d-6547cf2c6351-operator-scripts\") pod \"barbican-3cd9-account-create-update-6dwwd\" (UID: \"3686d9a5-edf5-4ab9-9c9d-6547cf2c6351\") " pod="openstack/barbican-3cd9-account-create-update-6dwwd" Feb 19 13:04:27 crc kubenswrapper[4833]: I0219 13:04:27.170390 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h55vb\" (UniqueName: \"kubernetes.io/projected/9a5eae49-48c5-4eb3-869c-1af4cea5877d-kube-api-access-h55vb\") pod \"neutron-b003-account-create-update-f27jp\" (UID: \"9a5eae49-48c5-4eb3-869c-1af4cea5877d\") " pod="openstack/neutron-b003-account-create-update-f27jp" Feb 19 13:04:27 crc kubenswrapper[4833]: I0219 13:04:27.170410 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sgtc\" (UniqueName: \"kubernetes.io/projected/3686d9a5-edf5-4ab9-9c9d-6547cf2c6351-kube-api-access-2sgtc\") pod \"barbican-3cd9-account-create-update-6dwwd\" (UID: \"3686d9a5-edf5-4ab9-9c9d-6547cf2c6351\") " pod="openstack/barbican-3cd9-account-create-update-6dwwd" Feb 19 13:04:27 crc kubenswrapper[4833]: I0219 13:04:27.170933 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a5eae49-48c5-4eb3-869c-1af4cea5877d-operator-scripts\") pod 
\"neutron-b003-account-create-update-f27jp\" (UID: \"9a5eae49-48c5-4eb3-869c-1af4cea5877d\") " pod="openstack/neutron-b003-account-create-update-f27jp" Feb 19 13:04:27 crc kubenswrapper[4833]: I0219 13:04:27.187382 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h55vb\" (UniqueName: \"kubernetes.io/projected/9a5eae49-48c5-4eb3-869c-1af4cea5877d-kube-api-access-h55vb\") pod \"neutron-b003-account-create-update-f27jp\" (UID: \"9a5eae49-48c5-4eb3-869c-1af4cea5877d\") " pod="openstack/neutron-b003-account-create-update-f27jp" Feb 19 13:04:27 crc kubenswrapper[4833]: I0219 13:04:27.247068 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b003-account-create-update-f27jp" Feb 19 13:04:27 crc kubenswrapper[4833]: I0219 13:04:27.272210 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3686d9a5-edf5-4ab9-9c9d-6547cf2c6351-operator-scripts\") pod \"barbican-3cd9-account-create-update-6dwwd\" (UID: \"3686d9a5-edf5-4ab9-9c9d-6547cf2c6351\") " pod="openstack/barbican-3cd9-account-create-update-6dwwd" Feb 19 13:04:27 crc kubenswrapper[4833]: I0219 13:04:27.272278 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sgtc\" (UniqueName: \"kubernetes.io/projected/3686d9a5-edf5-4ab9-9c9d-6547cf2c6351-kube-api-access-2sgtc\") pod \"barbican-3cd9-account-create-update-6dwwd\" (UID: \"3686d9a5-edf5-4ab9-9c9d-6547cf2c6351\") " pod="openstack/barbican-3cd9-account-create-update-6dwwd" Feb 19 13:04:27 crc kubenswrapper[4833]: I0219 13:04:27.272994 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3686d9a5-edf5-4ab9-9c9d-6547cf2c6351-operator-scripts\") pod \"barbican-3cd9-account-create-update-6dwwd\" (UID: \"3686d9a5-edf5-4ab9-9c9d-6547cf2c6351\") " pod="openstack/barbican-3cd9-account-create-update-6dwwd" Feb 19 13:04:27 crc kubenswrapper[4833]: I0219 13:04:27.288409 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sgtc\" (UniqueName: \"kubernetes.io/projected/3686d9a5-edf5-4ab9-9c9d-6547cf2c6351-kube-api-access-2sgtc\") pod \"barbican-3cd9-account-create-update-6dwwd\" (UID: \"3686d9a5-edf5-4ab9-9c9d-6547cf2c6351\") " pod="openstack/barbican-3cd9-account-create-update-6dwwd" Feb 19 13:04:27 crc kubenswrapper[4833]: I0219 13:04:27.339263 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3cd9-account-create-update-6dwwd" Feb 19 13:04:29 crc kubenswrapper[4833]: I0219 13:04:29.259247 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-fbrgv-config-6kxq4" event={"ID":"99105995-a7f9-4667-97a1-b28259252d16","Type":"ContainerDied","Data":"fab6e0326ec09deeba09213d328775717168fe1e6b67e53e6bca6aee1b9149b8"} Feb 19 13:04:29 crc kubenswrapper[4833]: I0219 13:04:29.259294 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fab6e0326ec09deeba09213d328775717168fe1e6b67e53e6bca6aee1b9149b8" Feb 19 13:04:29 crc kubenswrapper[4833]: I0219 13:04:29.390563 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-fbrgv-config-6kxq4" Feb 19 13:04:29 crc kubenswrapper[4833]: I0219 13:04:29.528425 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnjvd\" (UniqueName: \"kubernetes.io/projected/99105995-a7f9-4667-97a1-b28259252d16-kube-api-access-lnjvd\") pod \"99105995-a7f9-4667-97a1-b28259252d16\" (UID: \"99105995-a7f9-4667-97a1-b28259252d16\") " Feb 19 13:04:29 crc kubenswrapper[4833]: I0219 13:04:29.528481 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/99105995-a7f9-4667-97a1-b28259252d16-additional-scripts\") pod \"99105995-a7f9-4667-97a1-b28259252d16\" (UID: \"99105995-a7f9-4667-97a1-b28259252d16\") " Feb 19 13:04:29 crc kubenswrapper[4833]: I0219 13:04:29.528569 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/99105995-a7f9-4667-97a1-b28259252d16-var-log-ovn\") pod \"99105995-a7f9-4667-97a1-b28259252d16\" (UID: \"99105995-a7f9-4667-97a1-b28259252d16\") " Feb 19 13:04:29 crc kubenswrapper[4833]: I0219 13:04:29.528603 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/99105995-a7f9-4667-97a1-b28259252d16-var-run-ovn\") pod \"99105995-a7f9-4667-97a1-b28259252d16\" (UID: \"99105995-a7f9-4667-97a1-b28259252d16\") " Feb 19 13:04:29 crc kubenswrapper[4833]: I0219 13:04:29.528622 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/99105995-a7f9-4667-97a1-b28259252d16-var-run\") pod \"99105995-a7f9-4667-97a1-b28259252d16\" (UID: \"99105995-a7f9-4667-97a1-b28259252d16\") " Feb 19 13:04:29 crc kubenswrapper[4833]: I0219 13:04:29.528641 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99105995-a7f9-4667-97a1-b28259252d16-scripts\") pod \"99105995-a7f9-4667-97a1-b28259252d16\" (UID: \"99105995-a7f9-4667-97a1-b28259252d16\") " Feb 19 13:04:29 crc kubenswrapper[4833]: I0219 13:04:29.528996 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99105995-a7f9-4667-97a1-b28259252d16-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "99105995-a7f9-4667-97a1-b28259252d16" (UID: "99105995-a7f9-4667-97a1-b28259252d16"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:04:29 crc kubenswrapper[4833]: I0219 13:04:29.529215 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99105995-a7f9-4667-97a1-b28259252d16-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "99105995-a7f9-4667-97a1-b28259252d16" (UID: "99105995-a7f9-4667-97a1-b28259252d16"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:04:29 crc kubenswrapper[4833]: I0219 13:04:29.529281 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/99105995-a7f9-4667-97a1-b28259252d16-var-run" (OuterVolumeSpecName: "var-run") pod "99105995-a7f9-4667-97a1-b28259252d16" (UID: "99105995-a7f9-4667-97a1-b28259252d16"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:04:29 crc kubenswrapper[4833]: I0219 13:04:29.530177 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99105995-a7f9-4667-97a1-b28259252d16-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "99105995-a7f9-4667-97a1-b28259252d16" (UID: "99105995-a7f9-4667-97a1-b28259252d16"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:29 crc kubenswrapper[4833]: I0219 13:04:29.530418 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99105995-a7f9-4667-97a1-b28259252d16-scripts" (OuterVolumeSpecName: "scripts") pod "99105995-a7f9-4667-97a1-b28259252d16" (UID: "99105995-a7f9-4667-97a1-b28259252d16"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:29 crc kubenswrapper[4833]: I0219 13:04:29.537048 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99105995-a7f9-4667-97a1-b28259252d16-kube-api-access-lnjvd" (OuterVolumeSpecName: "kube-api-access-lnjvd") pod "99105995-a7f9-4667-97a1-b28259252d16" (UID: "99105995-a7f9-4667-97a1-b28259252d16"). InnerVolumeSpecName "kube-api-access-lnjvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:04:29 crc kubenswrapper[4833]: I0219 13:04:29.630397 4833 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/99105995-a7f9-4667-97a1-b28259252d16-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:29 crc kubenswrapper[4833]: I0219 13:04:29.630429 4833 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/99105995-a7f9-4667-97a1-b28259252d16-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:29 crc kubenswrapper[4833]: I0219 13:04:29.630438 4833 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/99105995-a7f9-4667-97a1-b28259252d16-var-run\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:29 crc kubenswrapper[4833]: I0219 13:04:29.630450 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/99105995-a7f9-4667-97a1-b28259252d16-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:29 crc kubenswrapper[4833]: I0219 13:04:29.630462 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnjvd\" (UniqueName: \"kubernetes.io/projected/99105995-a7f9-4667-97a1-b28259252d16-kube-api-access-lnjvd\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:29 crc kubenswrapper[4833]: I0219 13:04:29.630472 4833 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/99105995-a7f9-4667-97a1-b28259252d16-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:29 crc kubenswrapper[4833]: I0219 13:04:29.956251 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-85x95"] Feb 19 13:04:30 crc kubenswrapper[4833]: I0219 13:04:30.040671 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 19 13:04:30 crc kubenswrapper[4833]: I0219 13:04:30.144399 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b003-account-create-update-f27jp"] Feb 19 13:04:30 crc kubenswrapper[4833]: I0219 13:04:30.161944 4833 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c4b6-account-create-update-58cbw"] Feb 19 13:04:30 crc kubenswrapper[4833]: W0219 13:04:30.163585 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2074acd6_4d68_4d93_84af_a608758fddd0.slice/crio-16b40aa009585989b14a8800a8f5790750efbad6f01cb07da3917d74bc60e629 WatchSource:0}: Error finding container 16b40aa009585989b14a8800a8f5790750efbad6f01cb07da3917d74bc60e629: Status 404 returned error can't find the container with id 16b40aa009585989b14a8800a8f5790750efbad6f01cb07da3917d74bc60e629 Feb 19 13:04:30 crc kubenswrapper[4833]: I0219 13:04:30.170675 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-h5szg"] Feb 19 13:04:30 crc kubenswrapper[4833]: W0219 13:04:30.171552 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3686d9a5_edf5_4ab9_9c9d_6547cf2c6351.slice/crio-8fe9f30cc4fcf618fd8365ceb870a201266d8c79863f1c738ee14b1f7c6acd89 WatchSource:0}: Error finding container 8fe9f30cc4fcf618fd8365ceb870a201266d8c79863f1c738ee14b1f7c6acd89: Status 404 returned error can't find the container with id 8fe9f30cc4fcf618fd8365ceb870a201266d8c79863f1c738ee14b1f7c6acd89 Feb 19 13:04:30 crc kubenswrapper[4833]: W0219 13:04:30.175519 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddecd476_ca49_4043_a064_b769163f4988.slice/crio-0ee37de6979ba2d05129f156126185534dc1d02e6392f5de31b7cc34f8269101 WatchSource:0}: Error finding container 0ee37de6979ba2d05129f156126185534dc1d02e6392f5de31b7cc34f8269101: Status 404 returned error can't find the container with id 0ee37de6979ba2d05129f156126185534dc1d02e6392f5de31b7cc34f8269101 Feb 19 13:04:30 crc kubenswrapper[4833]: I0219 13:04:30.177788 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3cd9-account-create-update-6dwwd"] Feb 19 13:04:30 crc kubenswrapper[4833]: I0219 13:04:30.289933 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nl4pd" event={"ID":"8ab56183-f7ec-44a3-95db-66064f67a074","Type":"ContainerStarted","Data":"534aa3b9ce24afc9a12522747b2c4a86edd9875760fb692528afb80e8cfa4e84"} Feb 19 13:04:30 crc kubenswrapper[4833]: I0219 13:04:30.291251 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b003-account-create-update-f27jp" event={"ID":"9a5eae49-48c5-4eb3-869c-1af4cea5877d","Type":"ContainerStarted","Data":"191836283d5bc1bb30b5e8d79019289ac5e36d765efaaa53ebb9d1c225a11658"} Feb 19 13:04:30 crc kubenswrapper[4833]: I0219 13:04:30.293056 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-85x95" event={"ID":"c2424429-364d-4bce-b9da-32ea56eae279","Type":"ContainerStarted","Data":"deabc5367851d3c222f1ba7839d84160efb5e1e124544fe5e9a142f0e3f83e3a"} Feb 19 13:04:30 crc kubenswrapper[4833]: I0219 13:04:30.293090 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-85x95" event={"ID":"c2424429-364d-4bce-b9da-32ea56eae279","Type":"ContainerStarted","Data":"f04e75467cdaffc6f071290e2475c9fde10d806bb5dccf14d223bcd73ed0906f"} Feb 19 13:04:30 crc kubenswrapper[4833]: I0219 13:04:30.294621 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3cd9-account-create-update-6dwwd" 
event={"ID":"3686d9a5-edf5-4ab9-9c9d-6547cf2c6351","Type":"ContainerStarted","Data":"8fe9f30cc4fcf618fd8365ceb870a201266d8c79863f1c738ee14b1f7c6acd89"} Feb 19 13:04:30 crc kubenswrapper[4833]: I0219 13:04:30.295633 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-h5szg" event={"ID":"ddecd476-ca49-4043-a064-b769163f4988","Type":"ContainerStarted","Data":"0ee37de6979ba2d05129f156126185534dc1d02e6392f5de31b7cc34f8269101"} Feb 19 13:04:30 crc kubenswrapper[4833]: I0219 13:04:30.297965 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c4b6-account-create-update-58cbw" event={"ID":"2074acd6-4d68-4d93-84af-a608758fddd0","Type":"ContainerStarted","Data":"16b40aa009585989b14a8800a8f5790750efbad6f01cb07da3917d74bc60e629"} Feb 19 13:04:30 crc kubenswrapper[4833]: I0219 13:04:30.307555 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-fbrgv-config-6kxq4" Feb 19 13:04:30 crc kubenswrapper[4833]: I0219 13:04:30.307576 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612","Type":"ContainerStarted","Data":"79e601430914468641542ab7ed6659e47a05107845b196c4cc6fce6c20b634a8"} Feb 19 13:04:30 crc kubenswrapper[4833]: I0219 13:04:30.309445 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-lgbvr"] Feb 19 13:04:30 crc kubenswrapper[4833]: I0219 13:04:30.315343 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-nl4pd" podStartSLOduration=2.137853492 podStartE2EDuration="16.315324268s" podCreationTimestamp="2026-02-19 13:04:14 +0000 UTC" firstStartedPulling="2026-02-19 13:04:15.285691775 +0000 UTC m=+1065.681210543" lastFinishedPulling="2026-02-19 13:04:29.463162551 +0000 UTC m=+1079.858681319" observedRunningTime="2026-02-19 13:04:30.304072801 +0000 UTC m=+1080.699591569" watchObservedRunningTime="2026-02-19 13:04:30.315324268 +0000 UTC m=+1080.710843036" Feb 19 13:04:30 crc kubenswrapper[4833]: I0219 13:04:30.326117 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-85x95" podStartSLOduration=4.326102043 podStartE2EDuration="4.326102043s" podCreationTimestamp="2026-02-19 13:04:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:04:30.319606078 +0000 UTC m=+1080.715124846" watchObservedRunningTime="2026-02-19 13:04:30.326102043 +0000 UTC m=+1080.721620811" Feb 19 13:04:30 crc kubenswrapper[4833]: I0219 13:04:30.379343 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-ql56p"] Feb 19 13:04:30 crc kubenswrapper[4833]: I0219 13:04:30.492958 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-fbrgv-config-6kxq4"] Feb 19 13:04:30 crc kubenswrapper[4833]: I0219 13:04:30.501632 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-fbrgv-config-6kxq4"] Feb 19 13:04:31 crc kubenswrapper[4833]: I0219 13:04:31.331467 4833 generic.go:334] "Generic (PLEG): container finished" podID="c4a33236-11e5-4757-a143-f57fd4f5a5f4" containerID="09554af6585cf5629a934c5fa37e3596c8b3bfe50fea3741085a4b0bd1a2816e" exitCode=0 Feb 19 13:04:31 crc kubenswrapper[4833]: I0219 13:04:31.331533 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lgbvr" 
event={"ID":"c4a33236-11e5-4757-a143-f57fd4f5a5f4","Type":"ContainerDied","Data":"09554af6585cf5629a934c5fa37e3596c8b3bfe50fea3741085a4b0bd1a2816e"} Feb 19 13:04:31 crc kubenswrapper[4833]: I0219 13:04:31.331828 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lgbvr" event={"ID":"c4a33236-11e5-4757-a143-f57fd4f5a5f4","Type":"ContainerStarted","Data":"a91b73d42cbe7d68fed877f4b03105dad0a3bbcbcb8e8b63da55d995f5325549"} Feb 19 13:04:31 crc kubenswrapper[4833]: I0219 13:04:31.333963 4833 generic.go:334] "Generic (PLEG): container finished" podID="9a5eae49-48c5-4eb3-869c-1af4cea5877d" containerID="f942a7e907365c058ad89b4344a5751b4456ede58e8f70233ca17deaac34287a" exitCode=0 Feb 19 13:04:31 crc kubenswrapper[4833]: I0219 13:04:31.334030 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b003-account-create-update-f27jp" event={"ID":"9a5eae49-48c5-4eb3-869c-1af4cea5877d","Type":"ContainerDied","Data":"f942a7e907365c058ad89b4344a5751b4456ede58e8f70233ca17deaac34287a"} Feb 19 13:04:31 crc kubenswrapper[4833]: I0219 13:04:31.336349 4833 generic.go:334] "Generic (PLEG): container finished" podID="c2424429-364d-4bce-b9da-32ea56eae279" containerID="deabc5367851d3c222f1ba7839d84160efb5e1e124544fe5e9a142f0e3f83e3a" exitCode=0 Feb 19 13:04:31 crc kubenswrapper[4833]: I0219 13:04:31.336422 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-85x95" event={"ID":"c2424429-364d-4bce-b9da-32ea56eae279","Type":"ContainerDied","Data":"deabc5367851d3c222f1ba7839d84160efb5e1e124544fe5e9a142f0e3f83e3a"} Feb 19 13:04:31 crc kubenswrapper[4833]: I0219 13:04:31.340169 4833 generic.go:334] "Generic (PLEG): container finished" podID="3686d9a5-edf5-4ab9-9c9d-6547cf2c6351" containerID="b1847606985b67e891fa82e74db42582387e187de7c23f539064e004cbe7a384" exitCode=0 Feb 19 13:04:31 crc kubenswrapper[4833]: I0219 13:04:31.340223 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3cd9-account-create-update-6dwwd" event={"ID":"3686d9a5-edf5-4ab9-9c9d-6547cf2c6351","Type":"ContainerDied","Data":"b1847606985b67e891fa82e74db42582387e187de7c23f539064e004cbe7a384"} Feb 19 13:04:31 crc kubenswrapper[4833]: I0219 13:04:31.349177 4833 generic.go:334] "Generic (PLEG): container finished" podID="d908d190-3ee3-4403-90d3-8635493b7b6c" containerID="17b0f8104604bfa715f7fea212a97b385d5a4b5459b23ea5dfce9269801791f2" exitCode=0 Feb 19 13:04:31 crc kubenswrapper[4833]: I0219 13:04:31.349342 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ql56p" event={"ID":"d908d190-3ee3-4403-90d3-8635493b7b6c","Type":"ContainerDied","Data":"17b0f8104604bfa715f7fea212a97b385d5a4b5459b23ea5dfce9269801791f2"} Feb 19 13:04:31 crc kubenswrapper[4833]: I0219 13:04:31.349407 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ql56p" event={"ID":"d908d190-3ee3-4403-90d3-8635493b7b6c","Type":"ContainerStarted","Data":"e02c8f7e13485e15d1b5ebbd344de9893f5d6c229e9b702d10959e12272f0131"} Feb 19 13:04:31 crc kubenswrapper[4833]: I0219 13:04:31.353510 4833 generic.go:334] "Generic (PLEG): container finished" podID="2074acd6-4d68-4d93-84af-a608758fddd0" containerID="b9617bc9bd8666bab0d214327eb3d35127c2cf485f058cfbb3fba9b8acbf2091" exitCode=0 Feb 19 13:04:31 crc kubenswrapper[4833]: I0219 13:04:31.353536 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c4b6-account-create-update-58cbw" 
event={"ID":"2074acd6-4d68-4d93-84af-a608758fddd0","Type":"ContainerDied","Data":"b9617bc9bd8666bab0d214327eb3d35127c2cf485f058cfbb3fba9b8acbf2091"} Feb 19 13:04:32 crc kubenswrapper[4833]: I0219 13:04:32.332871 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99105995-a7f9-4667-97a1-b28259252d16" path="/var/lib/kubelet/pods/99105995-a7f9-4667-97a1-b28259252d16/volumes" Feb 19 13:04:32 crc kubenswrapper[4833]: I0219 13:04:32.365984 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612","Type":"ContainerStarted","Data":"4d790934bd31b946c21ac14781fcf543e038eac0810026f02c8d658a00c4f7f0"} Feb 19 13:04:32 crc kubenswrapper[4833]: I0219 13:04:32.366034 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612","Type":"ContainerStarted","Data":"3972bc7db334d407a7d67588d7f8836bdc04b95394e48d45866bc6a59c611632"} Feb 19 13:04:32 crc kubenswrapper[4833]: I0219 13:04:32.366061 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612","Type":"ContainerStarted","Data":"cf272762007213a9c88127e1f2ea429aaed1d639b59f032346f4858b58152c64"} Feb 19 13:04:32 crc kubenswrapper[4833]: I0219 13:04:32.366074 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612","Type":"ContainerStarted","Data":"dfc5d00743c7bac558ecf623d3262475fac8bd18e8ac2154b7f12e79ad90590b"} Feb 19 13:04:33 crc kubenswrapper[4833]: I0219 13:04:33.377583 4833 generic.go:334] "Generic (PLEG): container finished" podID="a356e13b-39de-4d0b-aa58-f2dc6d3179fb" containerID="1f4138a564ad2060601baf14a1521b564d28aa8298dabde6f911b2c8db00a56f" exitCode=0 Feb 19 13:04:33 crc kubenswrapper[4833]: I0219 13:04:33.377676 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a356e13b-39de-4d0b-aa58-f2dc6d3179fb","Type":"ContainerDied","Data":"1f4138a564ad2060601baf14a1521b564d28aa8298dabde6f911b2c8db00a56f"} Feb 19 13:04:40 crc kubenswrapper[4833]: I0219 13:04:40.688221 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3cd9-account-create-update-6dwwd" Feb 19 13:04:40 crc kubenswrapper[4833]: I0219 13:04:40.770974 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3686d9a5-edf5-4ab9-9c9d-6547cf2c6351-operator-scripts\") pod \"3686d9a5-edf5-4ab9-9c9d-6547cf2c6351\" (UID: \"3686d9a5-edf5-4ab9-9c9d-6547cf2c6351\") " Feb 19 13:04:40 crc kubenswrapper[4833]: I0219 13:04:40.771156 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sgtc\" (UniqueName: \"kubernetes.io/projected/3686d9a5-edf5-4ab9-9c9d-6547cf2c6351-kube-api-access-2sgtc\") pod \"3686d9a5-edf5-4ab9-9c9d-6547cf2c6351\" (UID: \"3686d9a5-edf5-4ab9-9c9d-6547cf2c6351\") " Feb 19 13:04:40 crc kubenswrapper[4833]: I0219 13:04:40.771746 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3686d9a5-edf5-4ab9-9c9d-6547cf2c6351-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3686d9a5-edf5-4ab9-9c9d-6547cf2c6351" (UID: "3686d9a5-edf5-4ab9-9c9d-6547cf2c6351"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:40 crc kubenswrapper[4833]: I0219 13:04:40.776582 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3686d9a5-edf5-4ab9-9c9d-6547cf2c6351-kube-api-access-2sgtc" (OuterVolumeSpecName: "kube-api-access-2sgtc") pod "3686d9a5-edf5-4ab9-9c9d-6547cf2c6351" (UID: "3686d9a5-edf5-4ab9-9c9d-6547cf2c6351"). InnerVolumeSpecName "kube-api-access-2sgtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:04:40 crc kubenswrapper[4833]: I0219 13:04:40.872814 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sgtc\" (UniqueName: \"kubernetes.io/projected/3686d9a5-edf5-4ab9-9c9d-6547cf2c6351-kube-api-access-2sgtc\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:40 crc kubenswrapper[4833]: I0219 13:04:40.872855 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3686d9a5-edf5-4ab9-9c9d-6547cf2c6351-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.067270 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ql56p" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.077927 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lgbvr" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.090056 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b003-account-create-update-f27jp" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.090197 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c4b6-account-create-update-58cbw" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.101856 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-85x95" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.181934 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfj6n\" (UniqueName: \"kubernetes.io/projected/c2424429-364d-4bce-b9da-32ea56eae279-kube-api-access-wfj6n\") pod \"c2424429-364d-4bce-b9da-32ea56eae279\" (UID: \"c2424429-364d-4bce-b9da-32ea56eae279\") " Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.182322 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d908d190-3ee3-4403-90d3-8635493b7b6c-operator-scripts\") pod \"d908d190-3ee3-4403-90d3-8635493b7b6c\" (UID: \"d908d190-3ee3-4403-90d3-8635493b7b6c\") " Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.182358 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a5eae49-48c5-4eb3-869c-1af4cea5877d-operator-scripts\") pod \"9a5eae49-48c5-4eb3-869c-1af4cea5877d\" (UID: \"9a5eae49-48c5-4eb3-869c-1af4cea5877d\") " Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.182392 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfrxk\" (UniqueName: \"kubernetes.io/projected/2074acd6-4d68-4d93-84af-a608758fddd0-kube-api-access-dfrxk\") pod \"2074acd6-4d68-4d93-84af-a608758fddd0\" (UID: \"2074acd6-4d68-4d93-84af-a608758fddd0\") " Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.182439 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2424429-364d-4bce-b9da-32ea56eae279-operator-scripts\") pod \"c2424429-364d-4bce-b9da-32ea56eae279\" (UID: \"c2424429-364d-4bce-b9da-32ea56eae279\") " Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.182458 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4a33236-11e5-4757-a143-f57fd4f5a5f4-operator-scripts\") pod \"c4a33236-11e5-4757-a143-f57fd4f5a5f4\" (UID: \"c4a33236-11e5-4757-a143-f57fd4f5a5f4\") " Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.182574 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h55vb\" (UniqueName: \"kubernetes.io/projected/9a5eae49-48c5-4eb3-869c-1af4cea5877d-kube-api-access-h55vb\") pod \"9a5eae49-48c5-4eb3-869c-1af4cea5877d\" (UID: \"9a5eae49-48c5-4eb3-869c-1af4cea5877d\") " Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.182620 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2l84\" (UniqueName: \"kubernetes.io/projected/c4a33236-11e5-4757-a143-f57fd4f5a5f4-kube-api-access-f2l84\") pod \"c4a33236-11e5-4757-a143-f57fd4f5a5f4\" (UID: \"c4a33236-11e5-4757-a143-f57fd4f5a5f4\") " Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.182662 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2074acd6-4d68-4d93-84af-a608758fddd0-operator-scripts\") pod \"2074acd6-4d68-4d93-84af-a608758fddd0\" (UID: \"2074acd6-4d68-4d93-84af-a608758fddd0\") " Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.182704 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rjdb\" (UniqueName: 
\"kubernetes.io/projected/d908d190-3ee3-4403-90d3-8635493b7b6c-kube-api-access-2rjdb\") pod \"d908d190-3ee3-4403-90d3-8635493b7b6c\" (UID: \"d908d190-3ee3-4403-90d3-8635493b7b6c\") " Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.185808 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2424429-364d-4bce-b9da-32ea56eae279-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c2424429-364d-4bce-b9da-32ea56eae279" (UID: "c2424429-364d-4bce-b9da-32ea56eae279"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.187340 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a5eae49-48c5-4eb3-869c-1af4cea5877d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9a5eae49-48c5-4eb3-869c-1af4cea5877d" (UID: "9a5eae49-48c5-4eb3-869c-1af4cea5877d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.187910 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d908d190-3ee3-4403-90d3-8635493b7b6c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d908d190-3ee3-4403-90d3-8635493b7b6c" (UID: "d908d190-3ee3-4403-90d3-8635493b7b6c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.188799 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4a33236-11e5-4757-a143-f57fd4f5a5f4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c4a33236-11e5-4757-a143-f57fd4f5a5f4" (UID: "c4a33236-11e5-4757-a143-f57fd4f5a5f4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.189049 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2074acd6-4d68-4d93-84af-a608758fddd0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2074acd6-4d68-4d93-84af-a608758fddd0" (UID: "2074acd6-4d68-4d93-84af-a608758fddd0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.189664 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d908d190-3ee3-4403-90d3-8635493b7b6c-kube-api-access-2rjdb" (OuterVolumeSpecName: "kube-api-access-2rjdb") pod "d908d190-3ee3-4403-90d3-8635493b7b6c" (UID: "d908d190-3ee3-4403-90d3-8635493b7b6c"). InnerVolumeSpecName "kube-api-access-2rjdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.195324 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a5eae49-48c5-4eb3-869c-1af4cea5877d-kube-api-access-h55vb" (OuterVolumeSpecName: "kube-api-access-h55vb") pod "9a5eae49-48c5-4eb3-869c-1af4cea5877d" (UID: "9a5eae49-48c5-4eb3-869c-1af4cea5877d"). InnerVolumeSpecName "kube-api-access-h55vb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.198532 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2424429-364d-4bce-b9da-32ea56eae279-kube-api-access-wfj6n" (OuterVolumeSpecName: "kube-api-access-wfj6n") pod "c2424429-364d-4bce-b9da-32ea56eae279" (UID: "c2424429-364d-4bce-b9da-32ea56eae279"). InnerVolumeSpecName "kube-api-access-wfj6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.201685 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2074acd6-4d68-4d93-84af-a608758fddd0-kube-api-access-dfrxk" (OuterVolumeSpecName: "kube-api-access-dfrxk") pod "2074acd6-4d68-4d93-84af-a608758fddd0" (UID: "2074acd6-4d68-4d93-84af-a608758fddd0"). InnerVolumeSpecName "kube-api-access-dfrxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.206750 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4a33236-11e5-4757-a143-f57fd4f5a5f4-kube-api-access-f2l84" (OuterVolumeSpecName: "kube-api-access-f2l84") pod "c4a33236-11e5-4757-a143-f57fd4f5a5f4" (UID: "c4a33236-11e5-4757-a143-f57fd4f5a5f4"). InnerVolumeSpecName "kube-api-access-f2l84". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.284836 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfj6n\" (UniqueName: \"kubernetes.io/projected/c2424429-364d-4bce-b9da-32ea56eae279-kube-api-access-wfj6n\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.284868 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d908d190-3ee3-4403-90d3-8635493b7b6c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.284894 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9a5eae49-48c5-4eb3-869c-1af4cea5877d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.284903 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfrxk\" (UniqueName: \"kubernetes.io/projected/2074acd6-4d68-4d93-84af-a608758fddd0-kube-api-access-dfrxk\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.284913 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2424429-364d-4bce-b9da-32ea56eae279-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.284922 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4a33236-11e5-4757-a143-f57fd4f5a5f4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.284930 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h55vb\" (UniqueName: \"kubernetes.io/projected/9a5eae49-48c5-4eb3-869c-1af4cea5877d-kube-api-access-h55vb\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.284938 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2l84\" (UniqueName: 
\"kubernetes.io/projected/c4a33236-11e5-4757-a143-f57fd4f5a5f4-kube-api-access-f2l84\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.284946 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2074acd6-4d68-4d93-84af-a608758fddd0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.285019 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rjdb\" (UniqueName: \"kubernetes.io/projected/d908d190-3ee3-4403-90d3-8635493b7b6c-kube-api-access-2rjdb\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.450196 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-h5szg" event={"ID":"ddecd476-ca49-4043-a064-b769163f4988","Type":"ContainerStarted","Data":"213d0de8612f1bb7799b5fbbfd3f3934c38c221687a61f0faa4369f0ebf2b9b8"} Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.454559 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a356e13b-39de-4d0b-aa58-f2dc6d3179fb","Type":"ContainerStarted","Data":"b227aa18147c3318f4439519614f4e559588bef3b2edb3e58892bc18369d9abe"} Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.454742 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.456882 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-85x95" event={"ID":"c2424429-364d-4bce-b9da-32ea56eae279","Type":"ContainerDied","Data":"f04e75467cdaffc6f071290e2475c9fde10d806bb5dccf14d223bcd73ed0906f"} Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.457052 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f04e75467cdaffc6f071290e2475c9fde10d806bb5dccf14d223bcd73ed0906f" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.457198 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-85x95" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.500141 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-lgbvr" event={"ID":"c4a33236-11e5-4757-a143-f57fd4f5a5f4","Type":"ContainerDied","Data":"a91b73d42cbe7d68fed877f4b03105dad0a3bbcbcb8e8b63da55d995f5325549"} Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.500206 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a91b73d42cbe7d68fed877f4b03105dad0a3bbcbcb8e8b63da55d995f5325549" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.500170 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-lgbvr" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.502232 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-b003-account-create-update-f27jp" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.502237 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b003-account-create-update-f27jp" event={"ID":"9a5eae49-48c5-4eb3-869c-1af4cea5877d","Type":"ContainerDied","Data":"191836283d5bc1bb30b5e8d79019289ac5e36d765efaaa53ebb9d1c225a11658"} Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.502447 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="191836283d5bc1bb30b5e8d79019289ac5e36d765efaaa53ebb9d1c225a11658" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.505570 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612","Type":"ContainerStarted","Data":"81cd0f163cc32dc8fc12f6179ee7b38de479141b2383dffebd2e12e545326038"} Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.509199 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3cd9-account-create-update-6dwwd" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.509211 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3cd9-account-create-update-6dwwd" event={"ID":"3686d9a5-edf5-4ab9-9c9d-6547cf2c6351","Type":"ContainerDied","Data":"8fe9f30cc4fcf618fd8365ceb870a201266d8c79863f1c738ee14b1f7c6acd89"} Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.509601 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fe9f30cc4fcf618fd8365ceb870a201266d8c79863f1c738ee14b1f7c6acd89" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.517232 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-ql56p" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.517256 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ql56p" event={"ID":"d908d190-3ee3-4403-90d3-8635493b7b6c","Type":"ContainerDied","Data":"e02c8f7e13485e15d1b5ebbd344de9893f5d6c229e9b702d10959e12272f0131"} Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.518080 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e02c8f7e13485e15d1b5ebbd344de9893f5d6c229e9b702d10959e12272f0131" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.519169 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-h5szg" podStartSLOduration=4.614955478 podStartE2EDuration="15.519154364s" podCreationTimestamp="2026-02-19 13:04:26 +0000 UTC" firstStartedPulling="2026-02-19 13:04:30.191866936 +0000 UTC m=+1080.587385704" lastFinishedPulling="2026-02-19 13:04:41.096065822 +0000 UTC m=+1091.491584590" observedRunningTime="2026-02-19 13:04:41.484161121 +0000 UTC m=+1091.879679879" watchObservedRunningTime="2026-02-19 13:04:41.519154364 +0000 UTC m=+1091.914673132" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.522783 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371949.332005 podStartE2EDuration="1m27.522771186s" podCreationTimestamp="2026-02-19 13:03:14 +0000 UTC" firstStartedPulling="2026-02-19 13:03:16.600563947 +0000 UTC m=+1006.996082715" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:04:41.520897708 +0000 UTC m=+1091.916416476" watchObservedRunningTime="2026-02-19 13:04:41.522771186 +0000 UTC m=+1091.918289954" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.523134 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c4b6-account-create-update-58cbw" event={"ID":"2074acd6-4d68-4d93-84af-a608758fddd0","Type":"ContainerDied","Data":"16b40aa009585989b14a8800a8f5790750efbad6f01cb07da3917d74bc60e629"} Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.523170 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16b40aa009585989b14a8800a8f5790750efbad6f01cb07da3917d74bc60e629" Feb 19 13:04:41 crc kubenswrapper[4833]: I0219 13:04:41.523195 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c4b6-account-create-update-58cbw" Feb 19 13:04:42 crc kubenswrapper[4833]: I0219 13:04:42.571288 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612","Type":"ContainerStarted","Data":"60ee158c4a2df98a59e55aa7d75e195ffb5fce4dd53f422293e7b37d32725aa7"} Feb 19 13:04:42 crc kubenswrapper[4833]: I0219 13:04:42.571653 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612","Type":"ContainerStarted","Data":"4fcff61b2d028f31111886d4f361d4311163cd55609eeec4ff754c1186d21b9c"} Feb 19 13:04:42 crc kubenswrapper[4833]: I0219 13:04:42.571668 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612","Type":"ContainerStarted","Data":"463082ceaeb848264cf6c71e2fe2d7da70d81db446a3991f0b2588a307590305"} Feb 19 13:04:42 crc kubenswrapper[4833]: I0219 13:04:42.573762 4833 generic.go:334] "Generic (PLEG): container finished" podID="8ab56183-f7ec-44a3-95db-66064f67a074" containerID="534aa3b9ce24afc9a12522747b2c4a86edd9875760fb692528afb80e8cfa4e84" exitCode=0 Feb 19 13:04:42 crc kubenswrapper[4833]: I0219 13:04:42.573845 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nl4pd" event={"ID":"8ab56183-f7ec-44a3-95db-66064f67a074","Type":"ContainerDied","Data":"534aa3b9ce24afc9a12522747b2c4a86edd9875760fb692528afb80e8cfa4e84"} Feb 19 13:04:43 crc kubenswrapper[4833]: I0219 13:04:43.587336 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612","Type":"ContainerStarted","Data":"d7e2e0206de48a18a29cd1ab59f54c9dabdee5687876f6c77b8dab9b1e1cb994"} Feb 19 13:04:43 crc kubenswrapper[4833]: I0219 13:04:43.973277 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-nl4pd" Feb 19 13:04:44 crc kubenswrapper[4833]: I0219 13:04:44.040550 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8ab56183-f7ec-44a3-95db-66064f67a074-db-sync-config-data\") pod \"8ab56183-f7ec-44a3-95db-66064f67a074\" (UID: \"8ab56183-f7ec-44a3-95db-66064f67a074\") " Feb 19 13:04:44 crc kubenswrapper[4833]: I0219 13:04:44.040857 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ab56183-f7ec-44a3-95db-66064f67a074-config-data\") pod \"8ab56183-f7ec-44a3-95db-66064f67a074\" (UID: \"8ab56183-f7ec-44a3-95db-66064f67a074\") " Feb 19 13:04:44 crc kubenswrapper[4833]: I0219 13:04:44.040955 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6z5v\" (UniqueName: \"kubernetes.io/projected/8ab56183-f7ec-44a3-95db-66064f67a074-kube-api-access-z6z5v\") pod \"8ab56183-f7ec-44a3-95db-66064f67a074\" (UID: \"8ab56183-f7ec-44a3-95db-66064f67a074\") " Feb 19 13:04:44 crc kubenswrapper[4833]: I0219 13:04:44.040982 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab56183-f7ec-44a3-95db-66064f67a074-combined-ca-bundle\") pod \"8ab56183-f7ec-44a3-95db-66064f67a074\" (UID: \"8ab56183-f7ec-44a3-95db-66064f67a074\") " Feb 19 13:04:44 crc kubenswrapper[4833]: I0219 13:04:44.048824 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ab56183-f7ec-44a3-95db-66064f67a074-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8ab56183-f7ec-44a3-95db-66064f67a074" (UID: "8ab56183-f7ec-44a3-95db-66064f67a074"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:04:44 crc kubenswrapper[4833]: I0219 13:04:44.048873 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ab56183-f7ec-44a3-95db-66064f67a074-kube-api-access-z6z5v" (OuterVolumeSpecName: "kube-api-access-z6z5v") pod "8ab56183-f7ec-44a3-95db-66064f67a074" (UID: "8ab56183-f7ec-44a3-95db-66064f67a074"). InnerVolumeSpecName "kube-api-access-z6z5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:04:44 crc kubenswrapper[4833]: I0219 13:04:44.078369 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ab56183-f7ec-44a3-95db-66064f67a074-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ab56183-f7ec-44a3-95db-66064f67a074" (UID: "8ab56183-f7ec-44a3-95db-66064f67a074"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:04:44 crc kubenswrapper[4833]: I0219 13:04:44.104744 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ab56183-f7ec-44a3-95db-66064f67a074-config-data" (OuterVolumeSpecName: "config-data") pod "8ab56183-f7ec-44a3-95db-66064f67a074" (UID: "8ab56183-f7ec-44a3-95db-66064f67a074"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:04:44 crc kubenswrapper[4833]: I0219 13:04:44.143190 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6z5v\" (UniqueName: \"kubernetes.io/projected/8ab56183-f7ec-44a3-95db-66064f67a074-kube-api-access-z6z5v\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:44 crc kubenswrapper[4833]: I0219 13:04:44.143237 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab56183-f7ec-44a3-95db-66064f67a074-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:44 crc kubenswrapper[4833]: I0219 13:04:44.143249 4833 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8ab56183-f7ec-44a3-95db-66064f67a074-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:44 crc kubenswrapper[4833]: I0219 13:04:44.143263 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ab56183-f7ec-44a3-95db-66064f67a074-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:44 crc kubenswrapper[4833]: I0219 13:04:44.604474 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612","Type":"ContainerStarted","Data":"f50601955117ef53c028daf8fbd683d034c7027d6ed15c326b2d7c401a59102d"} Feb 19 13:04:44 crc kubenswrapper[4833]: I0219 13:04:44.604542 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612","Type":"ContainerStarted","Data":"abcd1a8db6ee2be4cd5c01d0169cce8b5aabb146ec25ca5e1e56fb952e7fc3da"} Feb 19 13:04:44 crc kubenswrapper[4833]: I0219 13:04:44.604555 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612","Type":"ContainerStarted","Data":"18259fb1787581ae4dcc3e96d7f83a4e7fc91cf7867aad76caaec02e13d8e8f0"} Feb 19 13:04:44 crc kubenswrapper[4833]: I0219 13:04:44.604567 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612","Type":"ContainerStarted","Data":"7dec1900b79b295cfa11319b84432b169e093e4b403364ad6edfc15ef96cee43"} Feb 19 13:04:44 crc kubenswrapper[4833]: I0219 13:04:44.606249 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-nl4pd" event={"ID":"8ab56183-f7ec-44a3-95db-66064f67a074","Type":"ContainerDied","Data":"ad78f43e82da394ad4b115bd4f554604c500530adec1b729dc42400390d4ef25"} Feb 19 13:04:44 crc kubenswrapper[4833]: I0219 13:04:44.606278 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad78f43e82da394ad4b115bd4f554604c500530adec1b729dc42400390d4ef25" Feb 19 13:04:44 crc kubenswrapper[4833]: I0219 13:04:44.606338 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-nl4pd" Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.091407 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-v8558"] Feb 19 13:04:45 crc kubenswrapper[4833]: E0219 13:04:45.092630 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a5eae49-48c5-4eb3-869c-1af4cea5877d" containerName="mariadb-account-create-update" Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.092677 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a5eae49-48c5-4eb3-869c-1af4cea5877d" containerName="mariadb-account-create-update" Feb 19 13:04:45 crc kubenswrapper[4833]: E0219 13:04:45.092707 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d908d190-3ee3-4403-90d3-8635493b7b6c" containerName="mariadb-database-create" Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.092721 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d908d190-3ee3-4403-90d3-8635493b7b6c" containerName="mariadb-database-create" Feb 19 13:04:45 crc kubenswrapper[4833]: E0219 13:04:45.092740 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99105995-a7f9-4667-97a1-b28259252d16" containerName="ovn-config" Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.092747 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="99105995-a7f9-4667-97a1-b28259252d16" containerName="ovn-config" Feb 19 13:04:45 crc kubenswrapper[4833]: E0219 13:04:45.092762 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2424429-364d-4bce-b9da-32ea56eae279" containerName="mariadb-database-create" Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.092772 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2424429-364d-4bce-b9da-32ea56eae279" containerName="mariadb-database-create" Feb 19 13:04:45 crc kubenswrapper[4833]: E0219 13:04:45.092791 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2074acd6-4d68-4d93-84af-a608758fddd0" containerName="mariadb-account-create-update" Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.092797 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2074acd6-4d68-4d93-84af-a608758fddd0" containerName="mariadb-account-create-update" Feb 19 13:04:45 crc kubenswrapper[4833]: E0219 13:04:45.092820 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ab56183-f7ec-44a3-95db-66064f67a074" containerName="glance-db-sync" Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.092825 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ab56183-f7ec-44a3-95db-66064f67a074" containerName="glance-db-sync" Feb 19 13:04:45 crc kubenswrapper[4833]: E0219 13:04:45.092838 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3686d9a5-edf5-4ab9-9c9d-6547cf2c6351" containerName="mariadb-account-create-update" Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.092847 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3686d9a5-edf5-4ab9-9c9d-6547cf2c6351" containerName="mariadb-account-create-update" Feb 19 13:04:45 crc kubenswrapper[4833]: E0219 13:04:45.092862 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4a33236-11e5-4757-a143-f57fd4f5a5f4" containerName="mariadb-database-create" Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.092871 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4a33236-11e5-4757-a143-f57fd4f5a5f4" containerName="mariadb-database-create" Feb 19 13:04:45 crc 
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.093203 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4a33236-11e5-4757-a143-f57fd4f5a5f4" containerName="mariadb-database-create"
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.093221 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="2074acd6-4d68-4d93-84af-a608758fddd0" containerName="mariadb-account-create-update"
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.093232 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d908d190-3ee3-4403-90d3-8635493b7b6c" containerName="mariadb-database-create"
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.093241 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2424429-364d-4bce-b9da-32ea56eae279" containerName="mariadb-database-create"
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.093259 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a5eae49-48c5-4eb3-869c-1af4cea5877d" containerName="mariadb-account-create-update"
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.093274 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="99105995-a7f9-4667-97a1-b28259252d16" containerName="ovn-config"
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.093292 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ab56183-f7ec-44a3-95db-66064f67a074" containerName="glance-db-sync"
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.093311 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="3686d9a5-edf5-4ab9-9c9d-6547cf2c6351" containerName="mariadb-account-create-update"
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.097448 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-v8558"
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.116775 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-v8558"]
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.158299 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xb4v\" (UniqueName: \"kubernetes.io/projected/215abe99-3723-4080-87a5-dfc0ac275af9-kube-api-access-4xb4v\") pod \"dnsmasq-dns-74dc88fc-v8558\" (UID: \"215abe99-3723-4080-87a5-dfc0ac275af9\") " pod="openstack/dnsmasq-dns-74dc88fc-v8558"
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.158372 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/215abe99-3723-4080-87a5-dfc0ac275af9-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-v8558\" (UID: \"215abe99-3723-4080-87a5-dfc0ac275af9\") " pod="openstack/dnsmasq-dns-74dc88fc-v8558"
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.158391 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/215abe99-3723-4080-87a5-dfc0ac275af9-config\") pod \"dnsmasq-dns-74dc88fc-v8558\" (UID: \"215abe99-3723-4080-87a5-dfc0ac275af9\") " pod="openstack/dnsmasq-dns-74dc88fc-v8558"
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.158425 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/215abe99-3723-4080-87a5-dfc0ac275af9-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-v8558\" (UID: \"215abe99-3723-4080-87a5-dfc0ac275af9\") " pod="openstack/dnsmasq-dns-74dc88fc-v8558"
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.158458 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/215abe99-3723-4080-87a5-dfc0ac275af9-dns-svc\") pod \"dnsmasq-dns-74dc88fc-v8558\" (UID: \"215abe99-3723-4080-87a5-dfc0ac275af9\") " pod="openstack/dnsmasq-dns-74dc88fc-v8558"
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.259607 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/215abe99-3723-4080-87a5-dfc0ac275af9-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-v8558\" (UID: \"215abe99-3723-4080-87a5-dfc0ac275af9\") " pod="openstack/dnsmasq-dns-74dc88fc-v8558"
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.259649 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/215abe99-3723-4080-87a5-dfc0ac275af9-config\") pod \"dnsmasq-dns-74dc88fc-v8558\" (UID: \"215abe99-3723-4080-87a5-dfc0ac275af9\") " pod="openstack/dnsmasq-dns-74dc88fc-v8558"
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.259686 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/215abe99-3723-4080-87a5-dfc0ac275af9-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-v8558\" (UID: \"215abe99-3723-4080-87a5-dfc0ac275af9\") " pod="openstack/dnsmasq-dns-74dc88fc-v8558"
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.259723 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/215abe99-3723-4080-87a5-dfc0ac275af9-dns-svc\") pod \"dnsmasq-dns-74dc88fc-v8558\" (UID: \"215abe99-3723-4080-87a5-dfc0ac275af9\") " pod="openstack/dnsmasq-dns-74dc88fc-v8558"
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.259787 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xb4v\" (UniqueName: \"kubernetes.io/projected/215abe99-3723-4080-87a5-dfc0ac275af9-kube-api-access-4xb4v\") pod \"dnsmasq-dns-74dc88fc-v8558\" (UID: \"215abe99-3723-4080-87a5-dfc0ac275af9\") " pod="openstack/dnsmasq-dns-74dc88fc-v8558"
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.260716 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/215abe99-3723-4080-87a5-dfc0ac275af9-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-v8558\" (UID: \"215abe99-3723-4080-87a5-dfc0ac275af9\") " pod="openstack/dnsmasq-dns-74dc88fc-v8558"
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.260865 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/215abe99-3723-4080-87a5-dfc0ac275af9-config\") pod \"dnsmasq-dns-74dc88fc-v8558\" (UID: \"215abe99-3723-4080-87a5-dfc0ac275af9\") " pod="openstack/dnsmasq-dns-74dc88fc-v8558"
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.260941 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/215abe99-3723-4080-87a5-dfc0ac275af9-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-v8558\" (UID: \"215abe99-3723-4080-87a5-dfc0ac275af9\") " pod="openstack/dnsmasq-dns-74dc88fc-v8558"
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.261706 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/215abe99-3723-4080-87a5-dfc0ac275af9-dns-svc\") pod \"dnsmasq-dns-74dc88fc-v8558\" (UID: \"215abe99-3723-4080-87a5-dfc0ac275af9\") " pod="openstack/dnsmasq-dns-74dc88fc-v8558"
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.291284 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xb4v\" (UniqueName: \"kubernetes.io/projected/215abe99-3723-4080-87a5-dfc0ac275af9-kube-api-access-4xb4v\") pod \"dnsmasq-dns-74dc88fc-v8558\" (UID: \"215abe99-3723-4080-87a5-dfc0ac275af9\") " pod="openstack/dnsmasq-dns-74dc88fc-v8558"
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.416814 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-v8558"
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.621705 4833 generic.go:334] "Generic (PLEG): container finished" podID="ddecd476-ca49-4043-a064-b769163f4988" containerID="213d0de8612f1bb7799b5fbbfd3f3934c38c221687a61f0faa4369f0ebf2b9b8" exitCode=0
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.621973 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-h5szg" event={"ID":"ddecd476-ca49-4043-a064-b769163f4988","Type":"ContainerDied","Data":"213d0de8612f1bb7799b5fbbfd3f3934c38c221687a61f0faa4369f0ebf2b9b8"}
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.634138 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612","Type":"ContainerStarted","Data":"9b8cb492aa075a418cb14bd8210d1baba936d9b3a6bfdae70a3c5413fa2917bf"}
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.634177 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"0dfc7a49-4c64-4c4c-b0a9-eea1d8734612","Type":"ContainerStarted","Data":"7faa2284a550a893fa780ac487d3da9277df644c16d79898ac727558174572bf"}
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.655053 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-v8558"]
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.703823 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=41.455210462 podStartE2EDuration="54.703806923s" podCreationTimestamp="2026-02-19 13:03:51 +0000 UTC" firstStartedPulling="2026-02-19 13:04:30.067743417 +0000 UTC m=+1080.463262185" lastFinishedPulling="2026-02-19 13:04:43.316339878 +0000 UTC m=+1093.711858646" observedRunningTime="2026-02-19 13:04:45.697713647 +0000 UTC m=+1096.093232415" watchObservedRunningTime="2026-02-19 13:04:45.703806923 +0000 UTC m=+1096.099325691"
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.950817 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-v8558"]
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.985466 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-6xdtt"]
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.987052 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-6xdtt"
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.988746 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Feb 19 13:04:45 crc kubenswrapper[4833]: I0219 13:04:45.994326 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-6xdtt"]
Feb 19 13:04:46 crc kubenswrapper[4833]: I0219 13:04:46.074342 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-6xdtt\" (UID: \"379fba83-cf2f-4b1b-a0f1-fc42313a79b9\") " pod="openstack/dnsmasq-dns-5f59b8f679-6xdtt"
Feb 19 13:04:46 crc kubenswrapper[4833]: I0219 13:04:46.074447 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-config\") pod \"dnsmasq-dns-5f59b8f679-6xdtt\" (UID: \"379fba83-cf2f-4b1b-a0f1-fc42313a79b9\") " pod="openstack/dnsmasq-dns-5f59b8f679-6xdtt"
Feb 19 13:04:46 crc kubenswrapper[4833]: I0219 13:04:46.074477 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-6xdtt\" (UID: \"379fba83-cf2f-4b1b-a0f1-fc42313a79b9\") " pod="openstack/dnsmasq-dns-5f59b8f679-6xdtt"
Feb 19 13:04:46 crc kubenswrapper[4833]: I0219 13:04:46.074544 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-6xdtt\" (UID: \"379fba83-cf2f-4b1b-a0f1-fc42313a79b9\") " pod="openstack/dnsmasq-dns-5f59b8f679-6xdtt"
Feb 19 13:04:46 crc kubenswrapper[4833]: I0219 13:04:46.074578 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2w92\" (UniqueName: \"kubernetes.io/projected/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-kube-api-access-j2w92\") pod \"dnsmasq-dns-5f59b8f679-6xdtt\" (UID: \"379fba83-cf2f-4b1b-a0f1-fc42313a79b9\") " pod="openstack/dnsmasq-dns-5f59b8f679-6xdtt"
Feb 19 13:04:46 crc kubenswrapper[4833]: I0219 13:04:46.074622 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-6xdtt\" (UID: \"379fba83-cf2f-4b1b-a0f1-fc42313a79b9\") " pod="openstack/dnsmasq-dns-5f59b8f679-6xdtt"
Feb 19 13:04:46 crc kubenswrapper[4833]: I0219 13:04:46.176554 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-6xdtt\" (UID: \"379fba83-cf2f-4b1b-a0f1-fc42313a79b9\") " pod="openstack/dnsmasq-dns-5f59b8f679-6xdtt"
Feb 19 13:04:46 crc kubenswrapper[4833]: I0219 13:04:46.176634 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-config\") pod \"dnsmasq-dns-5f59b8f679-6xdtt\" (UID: \"379fba83-cf2f-4b1b-a0f1-fc42313a79b9\") " pod="openstack/dnsmasq-dns-5f59b8f679-6xdtt"
Feb 19 13:04:46 crc kubenswrapper[4833]: I0219 13:04:46.176655 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-6xdtt\" (UID: \"379fba83-cf2f-4b1b-a0f1-fc42313a79b9\") " pod="openstack/dnsmasq-dns-5f59b8f679-6xdtt"
Feb 19 13:04:46 crc kubenswrapper[4833]: I0219 13:04:46.176695 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-6xdtt\" (UID: \"379fba83-cf2f-4b1b-a0f1-fc42313a79b9\") " pod="openstack/dnsmasq-dns-5f59b8f679-6xdtt"
Feb 19 13:04:46 crc kubenswrapper[4833]: I0219 13:04:46.176721 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2w92\" (UniqueName: \"kubernetes.io/projected/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-kube-api-access-j2w92\") pod \"dnsmasq-dns-5f59b8f679-6xdtt\" (UID: \"379fba83-cf2f-4b1b-a0f1-fc42313a79b9\") " pod="openstack/dnsmasq-dns-5f59b8f679-6xdtt"
Feb 19 13:04:46 crc kubenswrapper[4833]: I0219 13:04:46.176754 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-6xdtt\" (UID: \"379fba83-cf2f-4b1b-a0f1-fc42313a79b9\") " pod="openstack/dnsmasq-dns-5f59b8f679-6xdtt"
Feb 19 13:04:46 crc kubenswrapper[4833]: I0219 13:04:46.177644 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-6xdtt\" (UID: \"379fba83-cf2f-4b1b-a0f1-fc42313a79b9\") " pod="openstack/dnsmasq-dns-5f59b8f679-6xdtt"
Feb 19 13:04:46 crc kubenswrapper[4833]: I0219 13:04:46.178230 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-6xdtt\" (UID: \"379fba83-cf2f-4b1b-a0f1-fc42313a79b9\") " pod="openstack/dnsmasq-dns-5f59b8f679-6xdtt"
Feb 19 13:04:46 crc kubenswrapper[4833]: I0219 13:04:46.178771 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-config\") pod \"dnsmasq-dns-5f59b8f679-6xdtt\" (UID: \"379fba83-cf2f-4b1b-a0f1-fc42313a79b9\") " pod="openstack/dnsmasq-dns-5f59b8f679-6xdtt"
Feb 19 13:04:46 crc kubenswrapper[4833]: I0219 13:04:46.179276 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-6xdtt\" (UID: \"379fba83-cf2f-4b1b-a0f1-fc42313a79b9\") " pod="openstack/dnsmasq-dns-5f59b8f679-6xdtt"
Feb 19 13:04:46 crc kubenswrapper[4833]: I0219 13:04:46.179886 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-6xdtt\" (UID: \"379fba83-cf2f-4b1b-a0f1-fc42313a79b9\") " pod="openstack/dnsmasq-dns-5f59b8f679-6xdtt"
Feb 19 13:04:46 crc kubenswrapper[4833]: I0219 13:04:46.199325 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2w92\" (UniqueName: \"kubernetes.io/projected/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-kube-api-access-j2w92\") pod \"dnsmasq-dns-5f59b8f679-6xdtt\" (UID: \"379fba83-cf2f-4b1b-a0f1-fc42313a79b9\") " pod="openstack/dnsmasq-dns-5f59b8f679-6xdtt"
Feb 19 13:04:46 crc kubenswrapper[4833]: I0219 13:04:46.301209 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-6xdtt"
Feb 19 13:04:46 crc kubenswrapper[4833]: I0219 13:04:46.645228 4833 generic.go:334] "Generic (PLEG): container finished" podID="215abe99-3723-4080-87a5-dfc0ac275af9" containerID="51f7125548701c724c0dbdf723c31070ee05fe75a7f511d8abcd2b65bc7c8302" exitCode=0
Feb 19 13:04:46 crc kubenswrapper[4833]: I0219 13:04:46.645313 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-v8558" event={"ID":"215abe99-3723-4080-87a5-dfc0ac275af9","Type":"ContainerDied","Data":"51f7125548701c724c0dbdf723c31070ee05fe75a7f511d8abcd2b65bc7c8302"}
Feb 19 13:04:46 crc kubenswrapper[4833]: I0219 13:04:46.645975 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-v8558" event={"ID":"215abe99-3723-4080-87a5-dfc0ac275af9","Type":"ContainerStarted","Data":"4376d6268c112cb28414d1cf50693b71b592b0e013bf678daeeab921fa91e1da"}
Feb 19 13:04:46 crc kubenswrapper[4833]: I0219 13:04:46.883285 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-6xdtt"]
Feb 19 13:04:46 crc kubenswrapper[4833]: W0219 13:04:46.889826 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod379fba83_cf2f_4b1b_a0f1_fc42313a79b9.slice/crio-0e0351b376811cb40442ff8d937d3ef665762945a27afe12883ebff22e067c61 WatchSource:0}: Error finding container 0e0351b376811cb40442ff8d937d3ef665762945a27afe12883ebff22e067c61: Status 404 returned error can't find the container with id 0e0351b376811cb40442ff8d937d3ef665762945a27afe12883ebff22e067c61
Feb 19 13:04:47 crc kubenswrapper[4833]: I0219 13:04:47.092897 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-h5szg"
Need to start a new one" pod="openstack/keystone-db-sync-h5szg" Feb 19 13:04:47 crc kubenswrapper[4833]: I0219 13:04:47.195686 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddecd476-ca49-4043-a064-b769163f4988-combined-ca-bundle\") pod \"ddecd476-ca49-4043-a064-b769163f4988\" (UID: \"ddecd476-ca49-4043-a064-b769163f4988\") " Feb 19 13:04:47 crc kubenswrapper[4833]: I0219 13:04:47.195933 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qls5b\" (UniqueName: \"kubernetes.io/projected/ddecd476-ca49-4043-a064-b769163f4988-kube-api-access-qls5b\") pod \"ddecd476-ca49-4043-a064-b769163f4988\" (UID: \"ddecd476-ca49-4043-a064-b769163f4988\") " Feb 19 13:04:47 crc kubenswrapper[4833]: I0219 13:04:47.196054 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddecd476-ca49-4043-a064-b769163f4988-config-data\") pod \"ddecd476-ca49-4043-a064-b769163f4988\" (UID: \"ddecd476-ca49-4043-a064-b769163f4988\") " Feb 19 13:04:47 crc kubenswrapper[4833]: I0219 13:04:47.201029 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddecd476-ca49-4043-a064-b769163f4988-kube-api-access-qls5b" (OuterVolumeSpecName: "kube-api-access-qls5b") pod "ddecd476-ca49-4043-a064-b769163f4988" (UID: "ddecd476-ca49-4043-a064-b769163f4988"). InnerVolumeSpecName "kube-api-access-qls5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:04:47 crc kubenswrapper[4833]: I0219 13:04:47.227013 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddecd476-ca49-4043-a064-b769163f4988-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ddecd476-ca49-4043-a064-b769163f4988" (UID: "ddecd476-ca49-4043-a064-b769163f4988"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:04:47 crc kubenswrapper[4833]: I0219 13:04:47.236322 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddecd476-ca49-4043-a064-b769163f4988-config-data" (OuterVolumeSpecName: "config-data") pod "ddecd476-ca49-4043-a064-b769163f4988" (UID: "ddecd476-ca49-4043-a064-b769163f4988"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:04:47 crc kubenswrapper[4833]: I0219 13:04:47.298536 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ddecd476-ca49-4043-a064-b769163f4988-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:47 crc kubenswrapper[4833]: I0219 13:04:47.298579 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qls5b\" (UniqueName: \"kubernetes.io/projected/ddecd476-ca49-4043-a064-b769163f4988-kube-api-access-qls5b\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:47 crc kubenswrapper[4833]: I0219 13:04:47.298593 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ddecd476-ca49-4043-a064-b769163f4988-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:47 crc kubenswrapper[4833]: I0219 13:04:47.654361 4833 generic.go:334] "Generic (PLEG): container finished" podID="379fba83-cf2f-4b1b-a0f1-fc42313a79b9" containerID="fce9909203cfc30043779a97d870931c739e2779ed28f908305f652271f4cb4f" exitCode=0 Feb 19 13:04:47 crc kubenswrapper[4833]: I0219 13:04:47.654431 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-6xdtt" event={"ID":"379fba83-cf2f-4b1b-a0f1-fc42313a79b9","Type":"ContainerDied","Data":"fce9909203cfc30043779a97d870931c739e2779ed28f908305f652271f4cb4f"} Feb 19 13:04:47 crc kubenswrapper[4833]: I0219 13:04:47.654754 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-6xdtt" event={"ID":"379fba83-cf2f-4b1b-a0f1-fc42313a79b9","Type":"ContainerStarted","Data":"0e0351b376811cb40442ff8d937d3ef665762945a27afe12883ebff22e067c61"} Feb 19 13:04:47 crc kubenswrapper[4833]: I0219 13:04:47.657813 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-v8558" event={"ID":"215abe99-3723-4080-87a5-dfc0ac275af9","Type":"ContainerStarted","Data":"b1ec74a156e2b89be94f8627b7a363de1a4829fba643d26d7f7026c7d30b7f0d"} Feb 19 13:04:47 crc kubenswrapper[4833]: I0219 13:04:47.657890 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74dc88fc-v8558" Feb 19 13:04:47 crc kubenswrapper[4833]: I0219 13:04:47.658005 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74dc88fc-v8558" podUID="215abe99-3723-4080-87a5-dfc0ac275af9" containerName="dnsmasq-dns" containerID="cri-o://b1ec74a156e2b89be94f8627b7a363de1a4829fba643d26d7f7026c7d30b7f0d" gracePeriod=10 Feb 19 13:04:47 crc kubenswrapper[4833]: I0219 13:04:47.660427 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-h5szg" event={"ID":"ddecd476-ca49-4043-a064-b769163f4988","Type":"ContainerDied","Data":"0ee37de6979ba2d05129f156126185534dc1d02e6392f5de31b7cc34f8269101"} Feb 19 13:04:47 crc kubenswrapper[4833]: I0219 13:04:47.660451 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ee37de6979ba2d05129f156126185534dc1d02e6392f5de31b7cc34f8269101" Feb 19 13:04:47 crc kubenswrapper[4833]: I0219 13:04:47.660537 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-h5szg" Feb 19 13:04:47 crc kubenswrapper[4833]: I0219 13:04:47.721269 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74dc88fc-v8558" podStartSLOduration=2.72125456 podStartE2EDuration="2.72125456s" podCreationTimestamp="2026-02-19 13:04:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:04:47.720459269 +0000 UTC m=+1098.115978037" watchObservedRunningTime="2026-02-19 13:04:47.72125456 +0000 UTC m=+1098.116773328" Feb 19 13:04:47 crc kubenswrapper[4833]: I0219 13:04:47.918475 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-6xdtt"] Feb 19 13:04:47 crc kubenswrapper[4833]: I0219 13:04:47.959162 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-jkmxm"] Feb 19 13:04:47 crc kubenswrapper[4833]: E0219 13:04:47.959483 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddecd476-ca49-4043-a064-b769163f4988" containerName="keystone-db-sync" Feb 19 13:04:47 crc kubenswrapper[4833]: I0219 13:04:47.959512 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddecd476-ca49-4043-a064-b769163f4988" containerName="keystone-db-sync" Feb 19 13:04:47 crc kubenswrapper[4833]: I0219 13:04:47.959673 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddecd476-ca49-4043-a064-b769163f4988" containerName="keystone-db-sync" Feb 19 13:04:47 crc kubenswrapper[4833]: I0219 13:04:47.960534 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-jkmxm" Feb 19 13:04:47 crc kubenswrapper[4833]: I0219 13:04:47.997258 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-jkmxm"] Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.017927 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-m4tzv"] Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.024359 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7b08485-262e-4098-b9bd-c0acdaad3185-config\") pod \"dnsmasq-dns-bbf5cc879-jkmxm\" (UID: \"d7b08485-262e-4098-b9bd-c0acdaad3185\") " pod="openstack/dnsmasq-dns-bbf5cc879-jkmxm" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.024405 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7b08485-262e-4098-b9bd-c0acdaad3185-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-jkmxm\" (UID: \"d7b08485-262e-4098-b9bd-c0acdaad3185\") " pod="openstack/dnsmasq-dns-bbf5cc879-jkmxm" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.024435 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvbmz\" (UniqueName: \"kubernetes.io/projected/d7b08485-262e-4098-b9bd-c0acdaad3185-kube-api-access-cvbmz\") pod \"dnsmasq-dns-bbf5cc879-jkmxm\" (UID: \"d7b08485-262e-4098-b9bd-c0acdaad3185\") " pod="openstack/dnsmasq-dns-bbf5cc879-jkmxm" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.024476 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7b08485-262e-4098-b9bd-c0acdaad3185-dns-swift-storage-0\") pod 
\"dnsmasq-dns-bbf5cc879-jkmxm\" (UID: \"d7b08485-262e-4098-b9bd-c0acdaad3185\") " pod="openstack/dnsmasq-dns-bbf5cc879-jkmxm" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.024560 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7b08485-262e-4098-b9bd-c0acdaad3185-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-jkmxm\" (UID: \"d7b08485-262e-4098-b9bd-c0acdaad3185\") " pod="openstack/dnsmasq-dns-bbf5cc879-jkmxm" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.024583 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7b08485-262e-4098-b9bd-c0acdaad3185-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-jkmxm\" (UID: \"d7b08485-262e-4098-b9bd-c0acdaad3185\") " pod="openstack/dnsmasq-dns-bbf5cc879-jkmxm" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.028570 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-m4tzv" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.035039 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.035282 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-lhmgj" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.035397 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.035935 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.036048 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.037982 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-m4tzv"] Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.129703 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/25fd3be3-b99d-49e5-9922-d776198b0c52-credential-keys\") pod \"keystone-bootstrap-m4tzv\" (UID: \"25fd3be3-b99d-49e5-9922-d776198b0c52\") " pod="openstack/keystone-bootstrap-m4tzv" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.129970 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7b08485-262e-4098-b9bd-c0acdaad3185-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-jkmxm\" (UID: \"d7b08485-262e-4098-b9bd-c0acdaad3185\") " pod="openstack/dnsmasq-dns-bbf5cc879-jkmxm" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.130069 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7b08485-262e-4098-b9bd-c0acdaad3185-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-jkmxm\" (UID: \"d7b08485-262e-4098-b9bd-c0acdaad3185\") " pod="openstack/dnsmasq-dns-bbf5cc879-jkmxm" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.130171 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25fd3be3-b99d-49e5-9922-d776198b0c52-combined-ca-bundle\") pod 
\"keystone-bootstrap-m4tzv\" (UID: \"25fd3be3-b99d-49e5-9922-d776198b0c52\") " pod="openstack/keystone-bootstrap-m4tzv" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.130257 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7b08485-262e-4098-b9bd-c0acdaad3185-config\") pod \"dnsmasq-dns-bbf5cc879-jkmxm\" (UID: \"d7b08485-262e-4098-b9bd-c0acdaad3185\") " pod="openstack/dnsmasq-dns-bbf5cc879-jkmxm" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.130330 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25fd3be3-b99d-49e5-9922-d776198b0c52-fernet-keys\") pod \"keystone-bootstrap-m4tzv\" (UID: \"25fd3be3-b99d-49e5-9922-d776198b0c52\") " pod="openstack/keystone-bootstrap-m4tzv" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.130404 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7b08485-262e-4098-b9bd-c0acdaad3185-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-jkmxm\" (UID: \"d7b08485-262e-4098-b9bd-c0acdaad3185\") " pod="openstack/dnsmasq-dns-bbf5cc879-jkmxm" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.130483 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvbmz\" (UniqueName: \"kubernetes.io/projected/d7b08485-262e-4098-b9bd-c0acdaad3185-kube-api-access-cvbmz\") pod \"dnsmasq-dns-bbf5cc879-jkmxm\" (UID: \"d7b08485-262e-4098-b9bd-c0acdaad3185\") " pod="openstack/dnsmasq-dns-bbf5cc879-jkmxm" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.130580 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25fd3be3-b99d-49e5-9922-d776198b0c52-scripts\") pod \"keystone-bootstrap-m4tzv\" (UID: \"25fd3be3-b99d-49e5-9922-d776198b0c52\") " pod="openstack/keystone-bootstrap-m4tzv" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.130660 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25fd3be3-b99d-49e5-9922-d776198b0c52-config-data\") pod \"keystone-bootstrap-m4tzv\" (UID: \"25fd3be3-b99d-49e5-9922-d776198b0c52\") " pod="openstack/keystone-bootstrap-m4tzv" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.130738 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7b08485-262e-4098-b9bd-c0acdaad3185-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-jkmxm\" (UID: \"d7b08485-262e-4098-b9bd-c0acdaad3185\") " pod="openstack/dnsmasq-dns-bbf5cc879-jkmxm" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.130802 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx5vn\" (UniqueName: \"kubernetes.io/projected/25fd3be3-b99d-49e5-9922-d776198b0c52-kube-api-access-jx5vn\") pod \"keystone-bootstrap-m4tzv\" (UID: \"25fd3be3-b99d-49e5-9922-d776198b0c52\") " pod="openstack/keystone-bootstrap-m4tzv" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.132090 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7b08485-262e-4098-b9bd-c0acdaad3185-config\") pod \"dnsmasq-dns-bbf5cc879-jkmxm\" (UID: 
\"d7b08485-262e-4098-b9bd-c0acdaad3185\") " pod="openstack/dnsmasq-dns-bbf5cc879-jkmxm" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.133410 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7b08485-262e-4098-b9bd-c0acdaad3185-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-jkmxm\" (UID: \"d7b08485-262e-4098-b9bd-c0acdaad3185\") " pod="openstack/dnsmasq-dns-bbf5cc879-jkmxm" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.135684 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7b08485-262e-4098-b9bd-c0acdaad3185-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-jkmxm\" (UID: \"d7b08485-262e-4098-b9bd-c0acdaad3185\") " pod="openstack/dnsmasq-dns-bbf5cc879-jkmxm" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.144472 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7b08485-262e-4098-b9bd-c0acdaad3185-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-jkmxm\" (UID: \"d7b08485-262e-4098-b9bd-c0acdaad3185\") " pod="openstack/dnsmasq-dns-bbf5cc879-jkmxm" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.145302 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7b08485-262e-4098-b9bd-c0acdaad3185-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-jkmxm\" (UID: \"d7b08485-262e-4098-b9bd-c0acdaad3185\") " pod="openstack/dnsmasq-dns-bbf5cc879-jkmxm" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.164323 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvbmz\" (UniqueName: \"kubernetes.io/projected/d7b08485-262e-4098-b9bd-c0acdaad3185-kube-api-access-cvbmz\") pod \"dnsmasq-dns-bbf5cc879-jkmxm\" (UID: \"d7b08485-262e-4098-b9bd-c0acdaad3185\") " pod="openstack/dnsmasq-dns-bbf5cc879-jkmxm" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.181623 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7594f6fd59-hxk8t"] Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.182858 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7594f6fd59-hxk8t" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.197666 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.197776 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.198039 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.198224 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-jh547" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.220143 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-v8558" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.233328 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25fd3be3-b99d-49e5-9922-d776198b0c52-combined-ca-bundle\") pod \"keystone-bootstrap-m4tzv\" (UID: \"25fd3be3-b99d-49e5-9922-d776198b0c52\") " pod="openstack/keystone-bootstrap-m4tzv" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.233374 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25fd3be3-b99d-49e5-9922-d776198b0c52-fernet-keys\") pod \"keystone-bootstrap-m4tzv\" (UID: \"25fd3be3-b99d-49e5-9922-d776198b0c52\") " pod="openstack/keystone-bootstrap-m4tzv" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.233408 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25fd3be3-b99d-49e5-9922-d776198b0c52-scripts\") pod \"keystone-bootstrap-m4tzv\" (UID: \"25fd3be3-b99d-49e5-9922-d776198b0c52\") " pod="openstack/keystone-bootstrap-m4tzv" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.233435 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25fd3be3-b99d-49e5-9922-d776198b0c52-config-data\") pod \"keystone-bootstrap-m4tzv\" (UID: \"25fd3be3-b99d-49e5-9922-d776198b0c52\") " pod="openstack/keystone-bootstrap-m4tzv" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.233461 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx5vn\" (UniqueName: \"kubernetes.io/projected/25fd3be3-b99d-49e5-9922-d776198b0c52-kube-api-access-jx5vn\") pod \"keystone-bootstrap-m4tzv\" (UID: \"25fd3be3-b99d-49e5-9922-d776198b0c52\") " pod="openstack/keystone-bootstrap-m4tzv" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.233484 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/25fd3be3-b99d-49e5-9922-d776198b0c52-credential-keys\") pod \"keystone-bootstrap-m4tzv\" (UID: \"25fd3be3-b99d-49e5-9922-d776198b0c52\") " pod="openstack/keystone-bootstrap-m4tzv" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.236642 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7594f6fd59-hxk8t"] Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.239413 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25fd3be3-b99d-49e5-9922-d776198b0c52-config-data\") pod \"keystone-bootstrap-m4tzv\" (UID: \"25fd3be3-b99d-49e5-9922-d776198b0c52\") " pod="openstack/keystone-bootstrap-m4tzv" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.239782 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25fd3be3-b99d-49e5-9922-d776198b0c52-scripts\") pod \"keystone-bootstrap-m4tzv\" (UID: \"25fd3be3-b99d-49e5-9922-d776198b0c52\") " pod="openstack/keystone-bootstrap-m4tzv" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.240710 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25fd3be3-b99d-49e5-9922-d776198b0c52-combined-ca-bundle\") pod \"keystone-bootstrap-m4tzv\" (UID: \"25fd3be3-b99d-49e5-9922-d776198b0c52\") " 
pod="openstack/keystone-bootstrap-m4tzv" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.243393 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25fd3be3-b99d-49e5-9922-d776198b0c52-fernet-keys\") pod \"keystone-bootstrap-m4tzv\" (UID: \"25fd3be3-b99d-49e5-9922-d776198b0c52\") " pod="openstack/keystone-bootstrap-m4tzv" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.253263 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/25fd3be3-b99d-49e5-9922-d776198b0c52-credential-keys\") pod \"keystone-bootstrap-m4tzv\" (UID: \"25fd3be3-b99d-49e5-9922-d776198b0c52\") " pod="openstack/keystone-bootstrap-m4tzv" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.270082 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx5vn\" (UniqueName: \"kubernetes.io/projected/25fd3be3-b99d-49e5-9922-d776198b0c52-kube-api-access-jx5vn\") pod \"keystone-bootstrap-m4tzv\" (UID: \"25fd3be3-b99d-49e5-9922-d776198b0c52\") " pod="openstack/keystone-bootstrap-m4tzv" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.285951 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-jkmxm" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.338916 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/215abe99-3723-4080-87a5-dfc0ac275af9-ovsdbserver-nb\") pod \"215abe99-3723-4080-87a5-dfc0ac275af9\" (UID: \"215abe99-3723-4080-87a5-dfc0ac275af9\") " Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.339148 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xb4v\" (UniqueName: \"kubernetes.io/projected/215abe99-3723-4080-87a5-dfc0ac275af9-kube-api-access-4xb4v\") pod \"215abe99-3723-4080-87a5-dfc0ac275af9\" (UID: \"215abe99-3723-4080-87a5-dfc0ac275af9\") " Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.339178 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/215abe99-3723-4080-87a5-dfc0ac275af9-dns-svc\") pod \"215abe99-3723-4080-87a5-dfc0ac275af9\" (UID: \"215abe99-3723-4080-87a5-dfc0ac275af9\") " Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.339199 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/215abe99-3723-4080-87a5-dfc0ac275af9-config\") pod \"215abe99-3723-4080-87a5-dfc0ac275af9\" (UID: \"215abe99-3723-4080-87a5-dfc0ac275af9\") " Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.339222 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/215abe99-3723-4080-87a5-dfc0ac275af9-ovsdbserver-sb\") pod \"215abe99-3723-4080-87a5-dfc0ac275af9\" (UID: \"215abe99-3723-4080-87a5-dfc0ac275af9\") " Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.339432 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q26n\" (UniqueName: \"kubernetes.io/projected/a6ff486b-931e-4973-9bac-5d68a07e9991-kube-api-access-6q26n\") pod \"horizon-7594f6fd59-hxk8t\" (UID: \"a6ff486b-931e-4973-9bac-5d68a07e9991\") " pod="openstack/horizon-7594f6fd59-hxk8t" Feb 19 13:04:48 crc 
kubenswrapper[4833]: I0219 13:04:48.339465 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a6ff486b-931e-4973-9bac-5d68a07e9991-horizon-secret-key\") pod \"horizon-7594f6fd59-hxk8t\" (UID: \"a6ff486b-931e-4973-9bac-5d68a07e9991\") " pod="openstack/horizon-7594f6fd59-hxk8t" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.339485 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6ff486b-931e-4973-9bac-5d68a07e9991-config-data\") pod \"horizon-7594f6fd59-hxk8t\" (UID: \"a6ff486b-931e-4973-9bac-5d68a07e9991\") " pod="openstack/horizon-7594f6fd59-hxk8t" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.339588 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6ff486b-931e-4973-9bac-5d68a07e9991-scripts\") pod \"horizon-7594f6fd59-hxk8t\" (UID: \"a6ff486b-931e-4973-9bac-5d68a07e9991\") " pod="openstack/horizon-7594f6fd59-hxk8t" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.339607 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6ff486b-931e-4973-9bac-5d68a07e9991-logs\") pod \"horizon-7594f6fd59-hxk8t\" (UID: \"a6ff486b-931e-4973-9bac-5d68a07e9991\") " pod="openstack/horizon-7594f6fd59-hxk8t" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.342576 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/215abe99-3723-4080-87a5-dfc0ac275af9-kube-api-access-4xb4v" (OuterVolumeSpecName: "kube-api-access-4xb4v") pod "215abe99-3723-4080-87a5-dfc0ac275af9" (UID: "215abe99-3723-4080-87a5-dfc0ac275af9"). InnerVolumeSpecName "kube-api-access-4xb4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.368341 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-m4tzv" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.442444 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6ff486b-931e-4973-9bac-5d68a07e9991-scripts\") pod \"horizon-7594f6fd59-hxk8t\" (UID: \"a6ff486b-931e-4973-9bac-5d68a07e9991\") " pod="openstack/horizon-7594f6fd59-hxk8t" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.442522 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6ff486b-931e-4973-9bac-5d68a07e9991-logs\") pod \"horizon-7594f6fd59-hxk8t\" (UID: \"a6ff486b-931e-4973-9bac-5d68a07e9991\") " pod="openstack/horizon-7594f6fd59-hxk8t" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.442576 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q26n\" (UniqueName: \"kubernetes.io/projected/a6ff486b-931e-4973-9bac-5d68a07e9991-kube-api-access-6q26n\") pod \"horizon-7594f6fd59-hxk8t\" (UID: \"a6ff486b-931e-4973-9bac-5d68a07e9991\") " pod="openstack/horizon-7594f6fd59-hxk8t" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.442604 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a6ff486b-931e-4973-9bac-5d68a07e9991-horizon-secret-key\") pod \"horizon-7594f6fd59-hxk8t\" (UID: \"a6ff486b-931e-4973-9bac-5d68a07e9991\") " pod="openstack/horizon-7594f6fd59-hxk8t" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.442623 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6ff486b-931e-4973-9bac-5d68a07e9991-config-data\") pod \"horizon-7594f6fd59-hxk8t\" (UID: \"a6ff486b-931e-4973-9bac-5d68a07e9991\") " pod="openstack/horizon-7594f6fd59-hxk8t" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.442729 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xb4v\" (UniqueName: \"kubernetes.io/projected/215abe99-3723-4080-87a5-dfc0ac275af9-kube-api-access-4xb4v\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.443575 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6ff486b-931e-4973-9bac-5d68a07e9991-logs\") pod \"horizon-7594f6fd59-hxk8t\" (UID: \"a6ff486b-931e-4973-9bac-5d68a07e9991\") " pod="openstack/horizon-7594f6fd59-hxk8t" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.443882 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6ff486b-931e-4973-9bac-5d68a07e9991-config-data\") pod \"horizon-7594f6fd59-hxk8t\" (UID: \"a6ff486b-931e-4973-9bac-5d68a07e9991\") " pod="openstack/horizon-7594f6fd59-hxk8t" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.444250 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6ff486b-931e-4973-9bac-5d68a07e9991-scripts\") pod \"horizon-7594f6fd59-hxk8t\" (UID: \"a6ff486b-931e-4973-9bac-5d68a07e9991\") " pod="openstack/horizon-7594f6fd59-hxk8t" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.494860 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/a6ff486b-931e-4973-9bac-5d68a07e9991-horizon-secret-key\") pod \"horizon-7594f6fd59-hxk8t\" (UID: \"a6ff486b-931e-4973-9bac-5d68a07e9991\") " pod="openstack/horizon-7594f6fd59-hxk8t" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.517710 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q26n\" (UniqueName: \"kubernetes.io/projected/a6ff486b-931e-4973-9bac-5d68a07e9991-kube-api-access-6q26n\") pod \"horizon-7594f6fd59-hxk8t\" (UID: \"a6ff486b-931e-4973-9bac-5d68a07e9991\") " pod="openstack/horizon-7594f6fd59-hxk8t" Feb 19 13:04:48 crc kubenswrapper[4833]: I0219 13:04:48.521738 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7594f6fd59-hxk8t" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.688217 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/215abe99-3723-4080-87a5-dfc0ac275af9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "215abe99-3723-4080-87a5-dfc0ac275af9" (UID: "215abe99-3723-4080-87a5-dfc0ac275af9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.702948 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/215abe99-3723-4080-87a5-dfc0ac275af9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "215abe99-3723-4080-87a5-dfc0ac275af9" (UID: "215abe99-3723-4080-87a5-dfc0ac275af9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.768390 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f59b8f679-6xdtt" podUID="379fba83-cf2f-4b1b-a0f1-fc42313a79b9" containerName="dnsmasq-dns" containerID="cri-o://9b7a0edff4b23e5075bda639e2d0d09d456b682633eede98fe8a308be6fc6bd7" gracePeriod=10 Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.779200 4833 generic.go:334] "Generic (PLEG): container finished" podID="215abe99-3723-4080-87a5-dfc0ac275af9" containerID="b1ec74a156e2b89be94f8627b7a363de1a4829fba643d26d7f7026c7d30b7f0d" exitCode=0 Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.779299 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-v8558" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.786051 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-dsz68"] Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.789117 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/215abe99-3723-4080-87a5-dfc0ac275af9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.791205 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/215abe99-3723-4080-87a5-dfc0ac275af9-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:49 crc kubenswrapper[4833]: E0219 13:04:48.789657 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="215abe99-3723-4080-87a5-dfc0ac275af9" containerName="init" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.792346 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="215abe99-3723-4080-87a5-dfc0ac275af9" containerName="init" Feb 19 13:04:49 crc kubenswrapper[4833]: E0219 13:04:48.792410 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="215abe99-3723-4080-87a5-dfc0ac275af9" containerName="dnsmasq-dns" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.792418 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="215abe99-3723-4080-87a5-dfc0ac275af9" containerName="dnsmasq-dns" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.792982 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/215abe99-3723-4080-87a5-dfc0ac275af9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "215abe99-3723-4080-87a5-dfc0ac275af9" (UID: "215abe99-3723-4080-87a5-dfc0ac275af9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.793078 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="215abe99-3723-4080-87a5-dfc0ac275af9" containerName="dnsmasq-dns" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.794071 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7d7d877d97-8jn6g"] Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.794987 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-dsz68" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.797089 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-dsz68"] Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.797120 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-6xdtt" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.797132 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.798927 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7d7d877d97-8jn6g" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.799145 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.799433 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-dcbpv" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.802281 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.804597 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-6xdtt" event={"ID":"379fba83-cf2f-4b1b-a0f1-fc42313a79b9","Type":"ContainerStarted","Data":"9b7a0edff4b23e5075bda639e2d0d09d456b682633eede98fe8a308be6fc6bd7"} Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.804634 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-hnxz9"] Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.824731 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/215abe99-3723-4080-87a5-dfc0ac275af9-config" (OuterVolumeSpecName: "config") pod "215abe99-3723-4080-87a5-dfc0ac275af9" (UID: "215abe99-3723-4080-87a5-dfc0ac275af9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.806227 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.832929 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.833037 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.839114 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7d7d877d97-8jn6g"] Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.839188 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-v8558" event={"ID":"215abe99-3723-4080-87a5-dfc0ac275af9","Type":"ContainerDied","Data":"b1ec74a156e2b89be94f8627b7a363de1a4829fba643d26d7f7026c7d30b7f0d"} Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.839236 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.839271 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-m6ckw"] Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.839319 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-hnxz9" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.839558 4833 scope.go:117] "RemoveContainer" containerID="b1ec74a156e2b89be94f8627b7a363de1a4829fba643d26d7f7026c7d30b7f0d" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.847148 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.847198 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-tl7wh" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.852419 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-m6ckw" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.852326 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-v8558" event={"ID":"215abe99-3723-4080-87a5-dfc0ac275af9","Type":"ContainerDied","Data":"4376d6268c112cb28414d1cf50693b71b592b0e013bf678daeeab921fa91e1da"} Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.852902 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-m6ckw"] Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.882563 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.882740 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-nh77q" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.882916 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.887730 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-hnxz9"] Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.897896 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2902e7f1-6f1b-4b67-a9fa-fd031a961900-config-data\") pod \"cinder-db-sync-dsz68\" (UID: \"2902e7f1-6f1b-4b67-a9fa-fd031a961900\") " pod="openstack/cinder-db-sync-dsz68" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.897939 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af233fd1-f013-4662-a320-68b9af5c43f2-logs\") pod \"horizon-7d7d877d97-8jn6g\" (UID: \"af233fd1-f013-4662-a320-68b9af5c43f2\") " pod="openstack/horizon-7d7d877d97-8jn6g" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.897960 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2902e7f1-6f1b-4b67-a9fa-fd031a961900-scripts\") pod \"cinder-db-sync-dsz68\" (UID: \"2902e7f1-6f1b-4b67-a9fa-fd031a961900\") " pod="openstack/cinder-db-sync-dsz68" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.898005 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2902e7f1-6f1b-4b67-a9fa-fd031a961900-db-sync-config-data\") pod \"cinder-db-sync-dsz68\" (UID: \"2902e7f1-6f1b-4b67-a9fa-fd031a961900\") " pod="openstack/cinder-db-sync-dsz68" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.898057 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af233fd1-f013-4662-a320-68b9af5c43f2-config-data\") pod \"horizon-7d7d877d97-8jn6g\" (UID: \"af233fd1-f013-4662-a320-68b9af5c43f2\") " pod="openstack/horizon-7d7d877d97-8jn6g" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.898078 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2902e7f1-6f1b-4b67-a9fa-fd031a961900-etc-machine-id\") pod \"cinder-db-sync-dsz68\" (UID: \"2902e7f1-6f1b-4b67-a9fa-fd031a961900\") " pod="openstack/cinder-db-sync-dsz68" Feb 19 13:04:49 crc kubenswrapper[4833]: 
I0219 13:04:48.898114 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af233fd1-f013-4662-a320-68b9af5c43f2-scripts\") pod \"horizon-7d7d877d97-8jn6g\" (UID: \"af233fd1-f013-4662-a320-68b9af5c43f2\") " pod="openstack/horizon-7d7d877d97-8jn6g" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.898132 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/af233fd1-f013-4662-a320-68b9af5c43f2-horizon-secret-key\") pod \"horizon-7d7d877d97-8jn6g\" (UID: \"af233fd1-f013-4662-a320-68b9af5c43f2\") " pod="openstack/horizon-7d7d877d97-8jn6g" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.898202 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2902e7f1-6f1b-4b67-a9fa-fd031a961900-combined-ca-bundle\") pod \"cinder-db-sync-dsz68\" (UID: \"2902e7f1-6f1b-4b67-a9fa-fd031a961900\") " pod="openstack/cinder-db-sync-dsz68" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.898321 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bjm8\" (UniqueName: \"kubernetes.io/projected/2902e7f1-6f1b-4b67-a9fa-fd031a961900-kube-api-access-9bjm8\") pod \"cinder-db-sync-dsz68\" (UID: \"2902e7f1-6f1b-4b67-a9fa-fd031a961900\") " pod="openstack/cinder-db-sync-dsz68" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.898349 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmr6m\" (UniqueName: \"kubernetes.io/projected/af233fd1-f013-4662-a320-68b9af5c43f2-kube-api-access-mmr6m\") pod \"horizon-7d7d877d97-8jn6g\" (UID: \"af233fd1-f013-4662-a320-68b9af5c43f2\") " pod="openstack/horizon-7d7d877d97-8jn6g" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.898443 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/215abe99-3723-4080-87a5-dfc0ac275af9-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.898454 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/215abe99-3723-4080-87a5-dfc0ac275af9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.905791 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-htb5q"] Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.907118 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-htb5q" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.910638 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.918854 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.919149 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-vxzrm" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.930520 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-htb5q"] Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.939938 4833 scope.go:117] "RemoveContainer" containerID="51f7125548701c724c0dbdf723c31070ee05fe75a7f511d8abcd2b65bc7c8302" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:48.978680 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-jkmxm"] Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.002424 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5aed427-a4af-40b6-bd9c-10284e0935ce-combined-ca-bundle\") pod \"barbican-db-sync-hnxz9\" (UID: \"d5aed427-a4af-40b6-bd9c-10284e0935ce\") " pod="openstack/barbican-db-sync-hnxz9" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.002480 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ef3a268-01cc-4ba4-b7cc-628bb6328271-scripts\") pod \"ceilometer-0\" (UID: \"2ef3a268-01cc-4ba4-b7cc-628bb6328271\") " pod="openstack/ceilometer-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.002529 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sc72\" (UniqueName: \"kubernetes.io/projected/e40f6228-f038-4dc4-9180-f399b9a8c30b-kube-api-access-2sc72\") pod \"placement-db-sync-htb5q\" (UID: \"e40f6228-f038-4dc4-9180-f399b9a8c30b\") " pod="openstack/placement-db-sync-htb5q" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.002553 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e40f6228-f038-4dc4-9180-f399b9a8c30b-scripts\") pod \"placement-db-sync-htb5q\" (UID: \"e40f6228-f038-4dc4-9180-f399b9a8c30b\") " pod="openstack/placement-db-sync-htb5q" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.002580 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5aed427-a4af-40b6-bd9c-10284e0935ce-db-sync-config-data\") pod \"barbican-db-sync-hnxz9\" (UID: \"d5aed427-a4af-40b6-bd9c-10284e0935ce\") " pod="openstack/barbican-db-sync-hnxz9" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.002599 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lngh\" (UniqueName: \"kubernetes.io/projected/2ef3a268-01cc-4ba4-b7cc-628bb6328271-kube-api-access-7lngh\") pod \"ceilometer-0\" (UID: \"2ef3a268-01cc-4ba4-b7cc-628bb6328271\") " pod="openstack/ceilometer-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.002622 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e40f6228-f038-4dc4-9180-f399b9a8c30b-logs\") pod \"placement-db-sync-htb5q\" (UID: \"e40f6228-f038-4dc4-9180-f399b9a8c30b\") " pod="openstack/placement-db-sync-htb5q" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.002644 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ef3a268-01cc-4ba4-b7cc-628bb6328271-run-httpd\") pod \"ceilometer-0\" (UID: \"2ef3a268-01cc-4ba4-b7cc-628bb6328271\") " pod="openstack/ceilometer-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.002668 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ef3a268-01cc-4ba4-b7cc-628bb6328271-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2ef3a268-01cc-4ba4-b7cc-628bb6328271\") " pod="openstack/ceilometer-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.002702 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bjm8\" (UniqueName: \"kubernetes.io/projected/2902e7f1-6f1b-4b67-a9fa-fd031a961900-kube-api-access-9bjm8\") pod \"cinder-db-sync-dsz68\" (UID: \"2902e7f1-6f1b-4b67-a9fa-fd031a961900\") " pod="openstack/cinder-db-sync-dsz68" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.002727 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ef3a268-01cc-4ba4-b7cc-628bb6328271-log-httpd\") pod \"ceilometer-0\" (UID: \"2ef3a268-01cc-4ba4-b7cc-628bb6328271\") " pod="openstack/ceilometer-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.002752 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmr6m\" (UniqueName: \"kubernetes.io/projected/af233fd1-f013-4662-a320-68b9af5c43f2-kube-api-access-mmr6m\") pod \"horizon-7d7d877d97-8jn6g\" (UID: \"af233fd1-f013-4662-a320-68b9af5c43f2\") " pod="openstack/horizon-7d7d877d97-8jn6g" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.002780 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e40f6228-f038-4dc4-9180-f399b9a8c30b-config-data\") pod \"placement-db-sync-htb5q\" (UID: \"e40f6228-f038-4dc4-9180-f399b9a8c30b\") " pod="openstack/placement-db-sync-htb5q" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.002801 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e40f6228-f038-4dc4-9180-f399b9a8c30b-combined-ca-bundle\") pod \"placement-db-sync-htb5q\" (UID: \"e40f6228-f038-4dc4-9180-f399b9a8c30b\") " pod="openstack/placement-db-sync-htb5q" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.002820 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef3a268-01cc-4ba4-b7cc-628bb6328271-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2ef3a268-01cc-4ba4-b7cc-628bb6328271\") " pod="openstack/ceilometer-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.002850 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj742\" (UniqueName: 
\"kubernetes.io/projected/8211e149-f236-498d-bc79-183c39d9d62e-kube-api-access-gj742\") pod \"neutron-db-sync-m6ckw\" (UID: \"8211e149-f236-498d-bc79-183c39d9d62e\") " pod="openstack/neutron-db-sync-m6ckw" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.002884 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8211e149-f236-498d-bc79-183c39d9d62e-combined-ca-bundle\") pod \"neutron-db-sync-m6ckw\" (UID: \"8211e149-f236-498d-bc79-183c39d9d62e\") " pod="openstack/neutron-db-sync-m6ckw" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.002908 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2902e7f1-6f1b-4b67-a9fa-fd031a961900-config-data\") pod \"cinder-db-sync-dsz68\" (UID: \"2902e7f1-6f1b-4b67-a9fa-fd031a961900\") " pod="openstack/cinder-db-sync-dsz68" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.002933 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af233fd1-f013-4662-a320-68b9af5c43f2-logs\") pod \"horizon-7d7d877d97-8jn6g\" (UID: \"af233fd1-f013-4662-a320-68b9af5c43f2\") " pod="openstack/horizon-7d7d877d97-8jn6g" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.002953 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2902e7f1-6f1b-4b67-a9fa-fd031a961900-scripts\") pod \"cinder-db-sync-dsz68\" (UID: \"2902e7f1-6f1b-4b67-a9fa-fd031a961900\") " pod="openstack/cinder-db-sync-dsz68" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.002983 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2902e7f1-6f1b-4b67-a9fa-fd031a961900-db-sync-config-data\") pod \"cinder-db-sync-dsz68\" (UID: \"2902e7f1-6f1b-4b67-a9fa-fd031a961900\") " pod="openstack/cinder-db-sync-dsz68" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.003020 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af233fd1-f013-4662-a320-68b9af5c43f2-config-data\") pod \"horizon-7d7d877d97-8jn6g\" (UID: \"af233fd1-f013-4662-a320-68b9af5c43f2\") " pod="openstack/horizon-7d7d877d97-8jn6g" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.003040 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2902e7f1-6f1b-4b67-a9fa-fd031a961900-etc-machine-id\") pod \"cinder-db-sync-dsz68\" (UID: \"2902e7f1-6f1b-4b67-a9fa-fd031a961900\") " pod="openstack/cinder-db-sync-dsz68" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.003062 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8211e149-f236-498d-bc79-183c39d9d62e-config\") pod \"neutron-db-sync-m6ckw\" (UID: \"8211e149-f236-498d-bc79-183c39d9d62e\") " pod="openstack/neutron-db-sync-m6ckw" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.003085 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af233fd1-f013-4662-a320-68b9af5c43f2-scripts\") pod \"horizon-7d7d877d97-8jn6g\" (UID: \"af233fd1-f013-4662-a320-68b9af5c43f2\") " 
pod="openstack/horizon-7d7d877d97-8jn6g" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.003106 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/af233fd1-f013-4662-a320-68b9af5c43f2-horizon-secret-key\") pod \"horizon-7d7d877d97-8jn6g\" (UID: \"af233fd1-f013-4662-a320-68b9af5c43f2\") " pod="openstack/horizon-7d7d877d97-8jn6g" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.003134 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef3a268-01cc-4ba4-b7cc-628bb6328271-config-data\") pod \"ceilometer-0\" (UID: \"2ef3a268-01cc-4ba4-b7cc-628bb6328271\") " pod="openstack/ceilometer-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.003170 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2902e7f1-6f1b-4b67-a9fa-fd031a961900-combined-ca-bundle\") pod \"cinder-db-sync-dsz68\" (UID: \"2902e7f1-6f1b-4b67-a9fa-fd031a961900\") " pod="openstack/cinder-db-sync-dsz68" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.003194 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhtf5\" (UniqueName: \"kubernetes.io/projected/d5aed427-a4af-40b6-bd9c-10284e0935ce-kube-api-access-fhtf5\") pod \"barbican-db-sync-hnxz9\" (UID: \"d5aed427-a4af-40b6-bd9c-10284e0935ce\") " pod="openstack/barbican-db-sync-hnxz9" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.006900 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2902e7f1-6f1b-4b67-a9fa-fd031a961900-etc-machine-id\") pod \"cinder-db-sync-dsz68\" (UID: \"2902e7f1-6f1b-4b67-a9fa-fd031a961900\") " pod="openstack/cinder-db-sync-dsz68" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.007247 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af233fd1-f013-4662-a320-68b9af5c43f2-logs\") pod \"horizon-7d7d877d97-8jn6g\" (UID: \"af233fd1-f013-4662-a320-68b9af5c43f2\") " pod="openstack/horizon-7d7d877d97-8jn6g" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.008985 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af233fd1-f013-4662-a320-68b9af5c43f2-config-data\") pod \"horizon-7d7d877d97-8jn6g\" (UID: \"af233fd1-f013-4662-a320-68b9af5c43f2\") " pod="openstack/horizon-7d7d877d97-8jn6g" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.009276 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af233fd1-f013-4662-a320-68b9af5c43f2-scripts\") pod \"horizon-7d7d877d97-8jn6g\" (UID: \"af233fd1-f013-4662-a320-68b9af5c43f2\") " pod="openstack/horizon-7d7d877d97-8jn6g" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.044460 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.047163 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2902e7f1-6f1b-4b67-a9fa-fd031a961900-combined-ca-bundle\") pod \"cinder-db-sync-dsz68\" (UID: \"2902e7f1-6f1b-4b67-a9fa-fd031a961900\") " 
pod="openstack/cinder-db-sync-dsz68" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.048641 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2902e7f1-6f1b-4b67-a9fa-fd031a961900-db-sync-config-data\") pod \"cinder-db-sync-dsz68\" (UID: \"2902e7f1-6f1b-4b67-a9fa-fd031a961900\") " pod="openstack/cinder-db-sync-dsz68" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.053380 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.055704 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2902e7f1-6f1b-4b67-a9fa-fd031a961900-config-data\") pod \"cinder-db-sync-dsz68\" (UID: \"2902e7f1-6f1b-4b67-a9fa-fd031a961900\") " pod="openstack/cinder-db-sync-dsz68" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.071139 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2902e7f1-6f1b-4b67-a9fa-fd031a961900-scripts\") pod \"cinder-db-sync-dsz68\" (UID: \"2902e7f1-6f1b-4b67-a9fa-fd031a961900\") " pod="openstack/cinder-db-sync-dsz68" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.071474 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/af233fd1-f013-4662-a320-68b9af5c43f2-horizon-secret-key\") pod \"horizon-7d7d877d97-8jn6g\" (UID: \"af233fd1-f013-4662-a320-68b9af5c43f2\") " pod="openstack/horizon-7d7d877d97-8jn6g" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.072426 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.072670 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.072904 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-6dfnb" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.073051 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.076890 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bjm8\" (UniqueName: \"kubernetes.io/projected/2902e7f1-6f1b-4b67-a9fa-fd031a961900-kube-api-access-9bjm8\") pod \"cinder-db-sync-dsz68\" (UID: \"2902e7f1-6f1b-4b67-a9fa-fd031a961900\") " pod="openstack/cinder-db-sync-dsz68" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.088116 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmr6m\" (UniqueName: \"kubernetes.io/projected/af233fd1-f013-4662-a320-68b9af5c43f2-kube-api-access-mmr6m\") pod \"horizon-7d7d877d97-8jn6g\" (UID: \"af233fd1-f013-4662-a320-68b9af5c43f2\") " pod="openstack/horizon-7d7d877d97-8jn6g" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.094996 4833 scope.go:117] "RemoveContainer" containerID="b1ec74a156e2b89be94f8627b7a363de1a4829fba643d26d7f7026c7d30b7f0d" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.102304 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.114811 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8211e149-f236-498d-bc79-183c39d9d62e-config\") pod \"neutron-db-sync-m6ckw\" (UID: \"8211e149-f236-498d-bc79-183c39d9d62e\") " pod="openstack/neutron-db-sync-m6ckw" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.114882 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef3a268-01cc-4ba4-b7cc-628bb6328271-config-data\") pod \"ceilometer-0\" (UID: \"2ef3a268-01cc-4ba4-b7cc-628bb6328271\") " pod="openstack/ceilometer-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.114920 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhtf5\" (UniqueName: \"kubernetes.io/projected/d5aed427-a4af-40b6-bd9c-10284e0935ce-kube-api-access-fhtf5\") pod \"barbican-db-sync-hnxz9\" (UID: \"d5aed427-a4af-40b6-bd9c-10284e0935ce\") " pod="openstack/barbican-db-sync-hnxz9" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.114952 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5aed427-a4af-40b6-bd9c-10284e0935ce-combined-ca-bundle\") pod \"barbican-db-sync-hnxz9\" (UID: \"d5aed427-a4af-40b6-bd9c-10284e0935ce\") " pod="openstack/barbican-db-sync-hnxz9" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.114979 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ef3a268-01cc-4ba4-b7cc-628bb6328271-scripts\") pod \"ceilometer-0\" (UID: \"2ef3a268-01cc-4ba4-b7cc-628bb6328271\") " pod="openstack/ceilometer-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.115000 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sc72\" (UniqueName: \"kubernetes.io/projected/e40f6228-f038-4dc4-9180-f399b9a8c30b-kube-api-access-2sc72\") pod \"placement-db-sync-htb5q\" (UID: \"e40f6228-f038-4dc4-9180-f399b9a8c30b\") " pod="openstack/placement-db-sync-htb5q" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.115021 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e40f6228-f038-4dc4-9180-f399b9a8c30b-scripts\") pod \"placement-db-sync-htb5q\" (UID: \"e40f6228-f038-4dc4-9180-f399b9a8c30b\") " pod="openstack/placement-db-sync-htb5q" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.115045 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5aed427-a4af-40b6-bd9c-10284e0935ce-db-sync-config-data\") pod \"barbican-db-sync-hnxz9\" (UID: \"d5aed427-a4af-40b6-bd9c-10284e0935ce\") " pod="openstack/barbican-db-sync-hnxz9" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.115066 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lngh\" (UniqueName: \"kubernetes.io/projected/2ef3a268-01cc-4ba4-b7cc-628bb6328271-kube-api-access-7lngh\") pod \"ceilometer-0\" (UID: \"2ef3a268-01cc-4ba4-b7cc-628bb6328271\") " pod="openstack/ceilometer-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.115094 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e40f6228-f038-4dc4-9180-f399b9a8c30b-logs\") pod \"placement-db-sync-htb5q\" (UID: 
\"e40f6228-f038-4dc4-9180-f399b9a8c30b\") " pod="openstack/placement-db-sync-htb5q" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.115114 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ef3a268-01cc-4ba4-b7cc-628bb6328271-run-httpd\") pod \"ceilometer-0\" (UID: \"2ef3a268-01cc-4ba4-b7cc-628bb6328271\") " pod="openstack/ceilometer-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.115142 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ef3a268-01cc-4ba4-b7cc-628bb6328271-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2ef3a268-01cc-4ba4-b7cc-628bb6328271\") " pod="openstack/ceilometer-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.115170 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ef3a268-01cc-4ba4-b7cc-628bb6328271-log-httpd\") pod \"ceilometer-0\" (UID: \"2ef3a268-01cc-4ba4-b7cc-628bb6328271\") " pod="openstack/ceilometer-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.115198 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e40f6228-f038-4dc4-9180-f399b9a8c30b-config-data\") pod \"placement-db-sync-htb5q\" (UID: \"e40f6228-f038-4dc4-9180-f399b9a8c30b\") " pod="openstack/placement-db-sync-htb5q" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.115220 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e40f6228-f038-4dc4-9180-f399b9a8c30b-combined-ca-bundle\") pod \"placement-db-sync-htb5q\" (UID: \"e40f6228-f038-4dc4-9180-f399b9a8c30b\") " pod="openstack/placement-db-sync-htb5q" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.115238 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef3a268-01cc-4ba4-b7cc-628bb6328271-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2ef3a268-01cc-4ba4-b7cc-628bb6328271\") " pod="openstack/ceilometer-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.115269 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj742\" (UniqueName: \"kubernetes.io/projected/8211e149-f236-498d-bc79-183c39d9d62e-kube-api-access-gj742\") pod \"neutron-db-sync-m6ckw\" (UID: \"8211e149-f236-498d-bc79-183c39d9d62e\") " pod="openstack/neutron-db-sync-m6ckw" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.115290 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8211e149-f236-498d-bc79-183c39d9d62e-combined-ca-bundle\") pod \"neutron-db-sync-m6ckw\" (UID: \"8211e149-f236-498d-bc79-183c39d9d62e\") " pod="openstack/neutron-db-sync-m6ckw" Feb 19 13:04:49 crc kubenswrapper[4833]: E0219 13:04:49.118766 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1ec74a156e2b89be94f8627b7a363de1a4829fba643d26d7f7026c7d30b7f0d\": container with ID starting with b1ec74a156e2b89be94f8627b7a363de1a4829fba643d26d7f7026c7d30b7f0d not found: ID does not exist" containerID="b1ec74a156e2b89be94f8627b7a363de1a4829fba643d26d7f7026c7d30b7f0d" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.118838 
4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1ec74a156e2b89be94f8627b7a363de1a4829fba643d26d7f7026c7d30b7f0d"} err="failed to get container status \"b1ec74a156e2b89be94f8627b7a363de1a4829fba643d26d7f7026c7d30b7f0d\": rpc error: code = NotFound desc = could not find container \"b1ec74a156e2b89be94f8627b7a363de1a4829fba643d26d7f7026c7d30b7f0d\": container with ID starting with b1ec74a156e2b89be94f8627b7a363de1a4829fba643d26d7f7026c7d30b7f0d not found: ID does not exist" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.118874 4833 scope.go:117] "RemoveContainer" containerID="51f7125548701c724c0dbdf723c31070ee05fe75a7f511d8abcd2b65bc7c8302" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.119991 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ef3a268-01cc-4ba4-b7cc-628bb6328271-log-httpd\") pod \"ceilometer-0\" (UID: \"2ef3a268-01cc-4ba4-b7cc-628bb6328271\") " pod="openstack/ceilometer-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.120547 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e40f6228-f038-4dc4-9180-f399b9a8c30b-logs\") pod \"placement-db-sync-htb5q\" (UID: \"e40f6228-f038-4dc4-9180-f399b9a8c30b\") " pod="openstack/placement-db-sync-htb5q" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.122090 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ef3a268-01cc-4ba4-b7cc-628bb6328271-run-httpd\") pod \"ceilometer-0\" (UID: \"2ef3a268-01cc-4ba4-b7cc-628bb6328271\") " pod="openstack/ceilometer-0" Feb 19 13:04:49 crc kubenswrapper[4833]: E0219 13:04:49.124818 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51f7125548701c724c0dbdf723c31070ee05fe75a7f511d8abcd2b65bc7c8302\": container with ID starting with 51f7125548701c724c0dbdf723c31070ee05fe75a7f511d8abcd2b65bc7c8302 not found: ID does not exist" containerID="51f7125548701c724c0dbdf723c31070ee05fe75a7f511d8abcd2b65bc7c8302" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.125605 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51f7125548701c724c0dbdf723c31070ee05fe75a7f511d8abcd2b65bc7c8302"} err="failed to get container status \"51f7125548701c724c0dbdf723c31070ee05fe75a7f511d8abcd2b65bc7c8302\": rpc error: code = NotFound desc = could not find container \"51f7125548701c724c0dbdf723c31070ee05fe75a7f511d8abcd2b65bc7c8302\": container with ID starting with 51f7125548701c724c0dbdf723c31070ee05fe75a7f511d8abcd2b65bc7c8302 not found: ID does not exist" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.130322 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5aed427-a4af-40b6-bd9c-10284e0935ce-db-sync-config-data\") pod \"barbican-db-sync-hnxz9\" (UID: \"d5aed427-a4af-40b6-bd9c-10284e0935ce\") " pod="openstack/barbican-db-sync-hnxz9" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.134644 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e40f6228-f038-4dc4-9180-f399b9a8c30b-scripts\") pod \"placement-db-sync-htb5q\" (UID: \"e40f6228-f038-4dc4-9180-f399b9a8c30b\") " pod="openstack/placement-db-sync-htb5q" Feb 19 13:04:49 crc 
kubenswrapper[4833]: I0219 13:04:49.139294 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e40f6228-f038-4dc4-9180-f399b9a8c30b-config-data\") pod \"placement-db-sync-htb5q\" (UID: \"e40f6228-f038-4dc4-9180-f399b9a8c30b\") " pod="openstack/placement-db-sync-htb5q" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.139403 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8211e149-f236-498d-bc79-183c39d9d62e-config\") pod \"neutron-db-sync-m6ckw\" (UID: \"8211e149-f236-498d-bc79-183c39d9d62e\") " pod="openstack/neutron-db-sync-m6ckw" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.139834 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5aed427-a4af-40b6-bd9c-10284e0935ce-combined-ca-bundle\") pod \"barbican-db-sync-hnxz9\" (UID: \"d5aed427-a4af-40b6-bd9c-10284e0935ce\") " pod="openstack/barbican-db-sync-hnxz9" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.140254 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ef3a268-01cc-4ba4-b7cc-628bb6328271-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2ef3a268-01cc-4ba4-b7cc-628bb6328271\") " pod="openstack/ceilometer-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.140359 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8211e149-f236-498d-bc79-183c39d9d62e-combined-ca-bundle\") pod \"neutron-db-sync-m6ckw\" (UID: \"8211e149-f236-498d-bc79-183c39d9d62e\") " pod="openstack/neutron-db-sync-m6ckw" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.141161 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef3a268-01cc-4ba4-b7cc-628bb6328271-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2ef3a268-01cc-4ba4-b7cc-628bb6328271\") " pod="openstack/ceilometer-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.146279 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e40f6228-f038-4dc4-9180-f399b9a8c30b-combined-ca-bundle\") pod \"placement-db-sync-htb5q\" (UID: \"e40f6228-f038-4dc4-9180-f399b9a8c30b\") " pod="openstack/placement-db-sync-htb5q" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.152550 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ef3a268-01cc-4ba4-b7cc-628bb6328271-scripts\") pod \"ceilometer-0\" (UID: \"2ef3a268-01cc-4ba4-b7cc-628bb6328271\") " pod="openstack/ceilometer-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.153172 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj742\" (UniqueName: \"kubernetes.io/projected/8211e149-f236-498d-bc79-183c39d9d62e-kube-api-access-gj742\") pod \"neutron-db-sync-m6ckw\" (UID: \"8211e149-f236-498d-bc79-183c39d9d62e\") " pod="openstack/neutron-db-sync-m6ckw" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.153413 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef3a268-01cc-4ba4-b7cc-628bb6328271-config-data\") pod \"ceilometer-0\" (UID: \"2ef3a268-01cc-4ba4-b7cc-628bb6328271\") " 
pod="openstack/ceilometer-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.158836 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lngh\" (UniqueName: \"kubernetes.io/projected/2ef3a268-01cc-4ba4-b7cc-628bb6328271-kube-api-access-7lngh\") pod \"ceilometer-0\" (UID: \"2ef3a268-01cc-4ba4-b7cc-628bb6328271\") " pod="openstack/ceilometer-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.162150 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhtf5\" (UniqueName: \"kubernetes.io/projected/d5aed427-a4af-40b6-bd9c-10284e0935ce-kube-api-access-fhtf5\") pod \"barbican-db-sync-hnxz9\" (UID: \"d5aed427-a4af-40b6-bd9c-10284e0935ce\") " pod="openstack/barbican-db-sync-hnxz9" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.166548 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-djbdf"] Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.167928 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-djbdf" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.174928 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sc72\" (UniqueName: \"kubernetes.io/projected/e40f6228-f038-4dc4-9180-f399b9a8c30b-kube-api-access-2sc72\") pod \"placement-db-sync-htb5q\" (UID: \"e40f6228-f038-4dc4-9180-f399b9a8c30b\") " pod="openstack/placement-db-sync-htb5q" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.219177 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"0085e416-340f-4ebb-b0fb-7501606bddf5\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.219213 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0085e416-340f-4ebb-b0fb-7501606bddf5-logs\") pod \"glance-default-internal-api-0\" (UID: \"0085e416-340f-4ebb-b0fb-7501606bddf5\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.219251 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0085e416-340f-4ebb-b0fb-7501606bddf5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0085e416-340f-4ebb-b0fb-7501606bddf5\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.219289 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8nwz\" (UniqueName: \"kubernetes.io/projected/0085e416-340f-4ebb-b0fb-7501606bddf5-kube-api-access-t8nwz\") pod \"glance-default-internal-api-0\" (UID: \"0085e416-340f-4ebb-b0fb-7501606bddf5\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.219329 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0085e416-340f-4ebb-b0fb-7501606bddf5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0085e416-340f-4ebb-b0fb-7501606bddf5\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: 
I0219 13:04:49.219355 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0085e416-340f-4ebb-b0fb-7501606bddf5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0085e416-340f-4ebb-b0fb-7501606bddf5\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.219398 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0085e416-340f-4ebb-b0fb-7501606bddf5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0085e416-340f-4ebb-b0fb-7501606bddf5\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.219439 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0085e416-340f-4ebb-b0fb-7501606bddf5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0085e416-340f-4ebb-b0fb-7501606bddf5\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.227446 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-dsz68" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.233553 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-djbdf"] Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.246792 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d7d877d97-8jn6g" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.250531 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.251962 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.264414 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.264457 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.272043 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.279781 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f59b8f679-6xdtt" podStartSLOduration=4.2797594 podStartE2EDuration="4.2797594s" podCreationTimestamp="2026-02-19 13:04:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:04:48.834832081 +0000 UTC m=+1099.230350859" watchObservedRunningTime="2026-02-19 13:04:49.2797594 +0000 UTC m=+1099.675278168" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.282394 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.310840 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-hnxz9" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.321011 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8nwz\" (UniqueName: \"kubernetes.io/projected/0085e416-340f-4ebb-b0fb-7501606bddf5-kube-api-access-t8nwz\") pod \"glance-default-internal-api-0\" (UID: \"0085e416-340f-4ebb-b0fb-7501606bddf5\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.321067 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66znt\" (UniqueName: \"kubernetes.io/projected/ca786541-c266-41b5-a91d-3d626d530b45-kube-api-access-66znt\") pod \"dnsmasq-dns-56df8fb6b7-djbdf\" (UID: \"ca786541-c266-41b5-a91d-3d626d530b45\") " pod="openstack/dnsmasq-dns-56df8fb6b7-djbdf" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.321115 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0085e416-340f-4ebb-b0fb-7501606bddf5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0085e416-340f-4ebb-b0fb-7501606bddf5\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.321153 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0085e416-340f-4ebb-b0fb-7501606bddf5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0085e416-340f-4ebb-b0fb-7501606bddf5\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.321209 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0085e416-340f-4ebb-b0fb-7501606bddf5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0085e416-340f-4ebb-b0fb-7501606bddf5\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.321265 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0085e416-340f-4ebb-b0fb-7501606bddf5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0085e416-340f-4ebb-b0fb-7501606bddf5\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.321295 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca786541-c266-41b5-a91d-3d626d530b45-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-djbdf\" (UID: \"ca786541-c266-41b5-a91d-3d626d530b45\") " pod="openstack/dnsmasq-dns-56df8fb6b7-djbdf" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.321332 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca786541-c266-41b5-a91d-3d626d530b45-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-djbdf\" (UID: \"ca786541-c266-41b5-a91d-3d626d530b45\") " pod="openstack/dnsmasq-dns-56df8fb6b7-djbdf" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.321365 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca786541-c266-41b5-a91d-3d626d530b45-ovsdbserver-sb\") pod 
\"dnsmasq-dns-56df8fb6b7-djbdf\" (UID: \"ca786541-c266-41b5-a91d-3d626d530b45\") " pod="openstack/dnsmasq-dns-56df8fb6b7-djbdf" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.321400 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca786541-c266-41b5-a91d-3d626d530b45-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-djbdf\" (UID: \"ca786541-c266-41b5-a91d-3d626d530b45\") " pod="openstack/dnsmasq-dns-56df8fb6b7-djbdf" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.321471 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"0085e416-340f-4ebb-b0fb-7501606bddf5\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.321519 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0085e416-340f-4ebb-b0fb-7501606bddf5-logs\") pod \"glance-default-internal-api-0\" (UID: \"0085e416-340f-4ebb-b0fb-7501606bddf5\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.321552 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca786541-c266-41b5-a91d-3d626d530b45-config\") pod \"dnsmasq-dns-56df8fb6b7-djbdf\" (UID: \"ca786541-c266-41b5-a91d-3d626d530b45\") " pod="openstack/dnsmasq-dns-56df8fb6b7-djbdf" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.321587 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0085e416-340f-4ebb-b0fb-7501606bddf5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0085e416-340f-4ebb-b0fb-7501606bddf5\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.328997 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0085e416-340f-4ebb-b0fb-7501606bddf5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0085e416-340f-4ebb-b0fb-7501606bddf5\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.329239 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0085e416-340f-4ebb-b0fb-7501606bddf5-logs\") pod \"glance-default-internal-api-0\" (UID: \"0085e416-340f-4ebb-b0fb-7501606bddf5\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.329481 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"0085e416-340f-4ebb-b0fb-7501606bddf5\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.330418 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-m6ckw" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.343216 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0085e416-340f-4ebb-b0fb-7501606bddf5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0085e416-340f-4ebb-b0fb-7501606bddf5\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.348140 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0085e416-340f-4ebb-b0fb-7501606bddf5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0085e416-340f-4ebb-b0fb-7501606bddf5\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.357138 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8nwz\" (UniqueName: \"kubernetes.io/projected/0085e416-340f-4ebb-b0fb-7501606bddf5-kube-api-access-t8nwz\") pod \"glance-default-internal-api-0\" (UID: \"0085e416-340f-4ebb-b0fb-7501606bddf5\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.372832 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-htb5q" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.374018 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0085e416-340f-4ebb-b0fb-7501606bddf5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0085e416-340f-4ebb-b0fb-7501606bddf5\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.375477 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0085e416-340f-4ebb-b0fb-7501606bddf5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0085e416-340f-4ebb-b0fb-7501606bddf5\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.423782 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-config-data\") pod \"glance-default-external-api-0\" (UID: \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.423973 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.423992 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n4lw\" (UniqueName: \"kubernetes.io/projected/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-kube-api-access-4n4lw\") pod \"glance-default-external-api-0\" (UID: \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.424017 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/ca786541-c266-41b5-a91d-3d626d530b45-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-djbdf\" (UID: \"ca786541-c266-41b5-a91d-3d626d530b45\") " pod="openstack/dnsmasq-dns-56df8fb6b7-djbdf" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.424044 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca786541-c266-41b5-a91d-3d626d530b45-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-djbdf\" (UID: \"ca786541-c266-41b5-a91d-3d626d530b45\") " pod="openstack/dnsmasq-dns-56df8fb6b7-djbdf" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.424059 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-logs\") pod \"glance-default-external-api-0\" (UID: \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.424075 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca786541-c266-41b5-a91d-3d626d530b45-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-djbdf\" (UID: \"ca786541-c266-41b5-a91d-3d626d530b45\") " pod="openstack/dnsmasq-dns-56df8fb6b7-djbdf" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.424091 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca786541-c266-41b5-a91d-3d626d530b45-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-djbdf\" (UID: \"ca786541-c266-41b5-a91d-3d626d530b45\") " pod="openstack/dnsmasq-dns-56df8fb6b7-djbdf" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.424143 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.424168 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.424185 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca786541-c266-41b5-a91d-3d626d530b45-config\") pod \"dnsmasq-dns-56df8fb6b7-djbdf\" (UID: \"ca786541-c266-41b5-a91d-3d626d530b45\") " pod="openstack/dnsmasq-dns-56df8fb6b7-djbdf" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.424213 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-scripts\") pod \"glance-default-external-api-0\" (UID: \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.424239 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66znt\" (UniqueName: 
\"kubernetes.io/projected/ca786541-c266-41b5-a91d-3d626d530b45-kube-api-access-66znt\") pod \"dnsmasq-dns-56df8fb6b7-djbdf\" (UID: \"ca786541-c266-41b5-a91d-3d626d530b45\") " pod="openstack/dnsmasq-dns-56df8fb6b7-djbdf" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.424290 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.425395 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca786541-c266-41b5-a91d-3d626d530b45-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-djbdf\" (UID: \"ca786541-c266-41b5-a91d-3d626d530b45\") " pod="openstack/dnsmasq-dns-56df8fb6b7-djbdf" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.430021 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca786541-c266-41b5-a91d-3d626d530b45-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-djbdf\" (UID: \"ca786541-c266-41b5-a91d-3d626d530b45\") " pod="openstack/dnsmasq-dns-56df8fb6b7-djbdf" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.432001 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca786541-c266-41b5-a91d-3d626d530b45-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-djbdf\" (UID: \"ca786541-c266-41b5-a91d-3d626d530b45\") " pod="openstack/dnsmasq-dns-56df8fb6b7-djbdf" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.432069 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"0085e416-340f-4ebb-b0fb-7501606bddf5\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.432596 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca786541-c266-41b5-a91d-3d626d530b45-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-djbdf\" (UID: \"ca786541-c266-41b5-a91d-3d626d530b45\") " pod="openstack/dnsmasq-dns-56df8fb6b7-djbdf" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.433364 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca786541-c266-41b5-a91d-3d626d530b45-config\") pod \"dnsmasq-dns-56df8fb6b7-djbdf\" (UID: \"ca786541-c266-41b5-a91d-3d626d530b45\") " pod="openstack/dnsmasq-dns-56df8fb6b7-djbdf" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.480990 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66znt\" (UniqueName: \"kubernetes.io/projected/ca786541-c266-41b5-a91d-3d626d530b45-kube-api-access-66znt\") pod \"dnsmasq-dns-56df8fb6b7-djbdf\" (UID: \"ca786541-c266-41b5-a91d-3d626d530b45\") " pod="openstack/dnsmasq-dns-56df8fb6b7-djbdf" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.486880 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.525135 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.525180 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.525211 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-scripts\") pod \"glance-default-external-api-0\" (UID: \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.525260 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.525303 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-config-data\") pod \"glance-default-external-api-0\" (UID: \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.525319 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.525339 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n4lw\" (UniqueName: \"kubernetes.io/projected/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-kube-api-access-4n4lw\") pod \"glance-default-external-api-0\" (UID: \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.525365 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-logs\") pod \"glance-default-external-api-0\" (UID: \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.525899 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-logs\") pod \"glance-default-external-api-0\" (UID: \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 
13:04:49.526294 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.526431 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.527333 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-djbdf" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.535252 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.540641 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-v8558"] Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.547143 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-scripts\") pod \"glance-default-external-api-0\" (UID: \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.548063 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-v8558"] Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.548925 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.551120 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-config-data\") pod \"glance-default-external-api-0\" (UID: \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.572952 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n4lw\" (UniqueName: \"kubernetes.io/projected/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-kube-api-access-4n4lw\") pod \"glance-default-external-api-0\" (UID: \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.580702 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 
13:04:49.821473 4833 generic.go:334] "Generic (PLEG): container finished" podID="379fba83-cf2f-4b1b-a0f1-fc42313a79b9" containerID="9b7a0edff4b23e5075bda639e2d0d09d456b682633eede98fe8a308be6fc6bd7" exitCode=0 Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.821992 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-6xdtt" event={"ID":"379fba83-cf2f-4b1b-a0f1-fc42313a79b9","Type":"ContainerDied","Data":"9b7a0edff4b23e5075bda639e2d0d09d456b682633eede98fe8a308be6fc6bd7"} Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.867120 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-6xdtt" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.872704 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.934177 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-ovsdbserver-nb\") pod \"379fba83-cf2f-4b1b-a0f1-fc42313a79b9\" (UID: \"379fba83-cf2f-4b1b-a0f1-fc42313a79b9\") " Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.934221 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-dns-swift-storage-0\") pod \"379fba83-cf2f-4b1b-a0f1-fc42313a79b9\" (UID: \"379fba83-cf2f-4b1b-a0f1-fc42313a79b9\") " Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.934264 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-ovsdbserver-sb\") pod \"379fba83-cf2f-4b1b-a0f1-fc42313a79b9\" (UID: \"379fba83-cf2f-4b1b-a0f1-fc42313a79b9\") " Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.934415 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-config\") pod \"379fba83-cf2f-4b1b-a0f1-fc42313a79b9\" (UID: \"379fba83-cf2f-4b1b-a0f1-fc42313a79b9\") " Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.934512 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-dns-svc\") pod \"379fba83-cf2f-4b1b-a0f1-fc42313a79b9\" (UID: \"379fba83-cf2f-4b1b-a0f1-fc42313a79b9\") " Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.934528 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2w92\" (UniqueName: \"kubernetes.io/projected/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-kube-api-access-j2w92\") pod \"379fba83-cf2f-4b1b-a0f1-fc42313a79b9\" (UID: \"379fba83-cf2f-4b1b-a0f1-fc42313a79b9\") " Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.946954 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-kube-api-access-j2w92" (OuterVolumeSpecName: "kube-api-access-j2w92") pod "379fba83-cf2f-4b1b-a0f1-fc42313a79b9" (UID: "379fba83-cf2f-4b1b-a0f1-fc42313a79b9"). InnerVolumeSpecName "kube-api-access-j2w92". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.997986 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "379fba83-cf2f-4b1b-a0f1-fc42313a79b9" (UID: "379fba83-cf2f-4b1b-a0f1-fc42313a79b9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:49 crc kubenswrapper[4833]: I0219 13:04:49.999780 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-config" (OuterVolumeSpecName: "config") pod "379fba83-cf2f-4b1b-a0f1-fc42313a79b9" (UID: "379fba83-cf2f-4b1b-a0f1-fc42313a79b9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.015312 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "379fba83-cf2f-4b1b-a0f1-fc42313a79b9" (UID: "379fba83-cf2f-4b1b-a0f1-fc42313a79b9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.019245 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "379fba83-cf2f-4b1b-a0f1-fc42313a79b9" (UID: "379fba83-cf2f-4b1b-a0f1-fc42313a79b9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.036723 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.036752 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.036764 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2w92\" (UniqueName: \"kubernetes.io/projected/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-kube-api-access-j2w92\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.036773 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.036781 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.038083 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "379fba83-cf2f-4b1b-a0f1-fc42313a79b9" (UID: "379fba83-cf2f-4b1b-a0f1-fc42313a79b9"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.113306 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7594f6fd59-hxk8t"] Feb 19 13:04:50 crc kubenswrapper[4833]: W0219 13:04:50.129149 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25fd3be3_b99d_49e5_9922_d776198b0c52.slice/crio-79b95869fe1e36d20ca026c8784ba8e1c78d6cdadac698fa419a884e175307ce WatchSource:0}: Error finding container 79b95869fe1e36d20ca026c8784ba8e1c78d6cdadac698fa419a884e175307ce: Status 404 returned error can't find the container with id 79b95869fe1e36d20ca026c8784ba8e1c78d6cdadac698fa419a884e175307ce Feb 19 13:04:50 crc kubenswrapper[4833]: W0219 13:04:50.134298 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7b08485_262e_4098_b9bd_c0acdaad3185.slice/crio-6d1e7cb1eb6904a87f8ae1b3a55d7f184e691c5d700592d8a7131e7ec8e379f6 WatchSource:0}: Error finding container 6d1e7cb1eb6904a87f8ae1b3a55d7f184e691c5d700592d8a7131e7ec8e379f6: Status 404 returned error can't find the container with id 6d1e7cb1eb6904a87f8ae1b3a55d7f184e691c5d700592d8a7131e7ec8e379f6 Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.138803 4833 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/379fba83-cf2f-4b1b-a0f1-fc42313a79b9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.142627 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-m4tzv"] Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.159343 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-jkmxm"] Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.263206 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-hnxz9"] Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.270632 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7d7d877d97-8jn6g"] Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.279123 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.285371 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-dsz68"] Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.344267 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="215abe99-3723-4080-87a5-dfc0ac275af9" path="/var/lib/kubelet/pods/215abe99-3723-4080-87a5-dfc0ac275af9/volumes" Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.480723 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-djbdf"] Feb 19 13:04:50 crc kubenswrapper[4833]: W0219 13:04:50.485893 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca786541_c266_41b5_a91d_3d626d530b45.slice/crio-0efb3353c1de49eca92c89a1e6fdac180547740ed2732551b5f7176f0c1fbafe WatchSource:0}: Error finding container 0efb3353c1de49eca92c89a1e6fdac180547740ed2732551b5f7176f0c1fbafe: Status 404 returned error can't find the container with id 0efb3353c1de49eca92c89a1e6fdac180547740ed2732551b5f7176f0c1fbafe Feb 19 13:04:50 crc kubenswrapper[4833]: W0219 13:04:50.489739 4833 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8211e149_f236_498d_bc79_183c39d9d62e.slice/crio-d92d961248aae8f7f767e6ae56eb25c8264184535b49e28e77a3bad1f57ba024 WatchSource:0}: Error finding container d92d961248aae8f7f767e6ae56eb25c8264184535b49e28e77a3bad1f57ba024: Status 404 returned error can't find the container with id d92d961248aae8f7f767e6ae56eb25c8264184535b49e28e77a3bad1f57ba024 Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.499202 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-m6ckw"] Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.521399 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-htb5q"] Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.623398 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 13:04:50 crc kubenswrapper[4833]: W0219 13:04:50.646040 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0085e416_340f_4ebb_b0fb_7501606bddf5.slice/crio-389a52e6065b10e3ec3145721fb2900ebebc51e23cb7bb8e8fc79f4963945962 WatchSource:0}: Error finding container 389a52e6065b10e3ec3145721fb2900ebebc51e23cb7bb8e8fc79f4963945962: Status 404 returned error can't find the container with id 389a52e6065b10e3ec3145721fb2900ebebc51e23cb7bb8e8fc79f4963945962 Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.728614 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 13:04:50 crc kubenswrapper[4833]: W0219 13:04:50.760422 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c807e2e_0b6c_47ee_abaa_5bdb6e157a1c.slice/crio-17158a9baff7b285b449f6592537af3e9a9c927a2a42f753060b986d1a09cc58 WatchSource:0}: Error finding container 17158a9baff7b285b449f6592537af3e9a9c927a2a42f753060b986d1a09cc58: Status 404 returned error can't find the container with id 17158a9baff7b285b449f6592537af3e9a9c927a2a42f753060b986d1a09cc58 Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.855158 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hnxz9" event={"ID":"d5aed427-a4af-40b6-bd9c-10284e0935ce","Type":"ContainerStarted","Data":"bcf7a4a5d4ff805093ad794ec8cd1f12a74d138f0fdb5168ecb9511e1da34332"} Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.892330 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-htb5q" event={"ID":"e40f6228-f038-4dc4-9180-f399b9a8c30b","Type":"ContainerStarted","Data":"8ca3faed5cd89bceafb79b5ed3b2503df056c64eb30a2150feb3731e91390a5a"} Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.903919 4833 generic.go:334] "Generic (PLEG): container finished" podID="d7b08485-262e-4098-b9bd-c0acdaad3185" containerID="6e6f9f7ba8f7c99e7e4ad7149173a18a176ae2a1a66b95b98a993e02958b4cdc" exitCode=0 Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.904034 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-jkmxm" event={"ID":"d7b08485-262e-4098-b9bd-c0acdaad3185","Type":"ContainerDied","Data":"6e6f9f7ba8f7c99e7e4ad7149173a18a176ae2a1a66b95b98a993e02958b4cdc"} Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.904063 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-jkmxm" 
event={"ID":"d7b08485-262e-4098-b9bd-c0acdaad3185","Type":"ContainerStarted","Data":"6d1e7cb1eb6904a87f8ae1b3a55d7f184e691c5d700592d8a7131e7ec8e379f6"} Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.913430 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-m4tzv" event={"ID":"25fd3be3-b99d-49e5-9922-d776198b0c52","Type":"ContainerStarted","Data":"44111b677fe64a1499e00cd8977512b8520ed595020ab2f0a9dc238288b49854"} Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.913470 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-m4tzv" event={"ID":"25fd3be3-b99d-49e5-9922-d776198b0c52","Type":"ContainerStarted","Data":"79b95869fe1e36d20ca026c8784ba8e1c78d6cdadac698fa419a884e175307ce"} Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.915659 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-m6ckw" event={"ID":"8211e149-f236-498d-bc79-183c39d9d62e","Type":"ContainerStarted","Data":"d528054cc919c0f793501e938ac43720b97595b6f5a4ce47acdc323d668237ab"} Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.915679 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-m6ckw" event={"ID":"8211e149-f236-498d-bc79-183c39d9d62e","Type":"ContainerStarted","Data":"d92d961248aae8f7f767e6ae56eb25c8264184535b49e28e77a3bad1f57ba024"} Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.917285 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ef3a268-01cc-4ba4-b7cc-628bb6328271","Type":"ContainerStarted","Data":"a3751e65c5cf86eba6f25c03cd28dc301d24a46311c5ab242db64a37ac46ac1e"} Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.929622 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dsz68" event={"ID":"2902e7f1-6f1b-4b67-a9fa-fd031a961900","Type":"ContainerStarted","Data":"1ec4cbf7581a959707028b2706c3ca38c637a925059150fd741b33a78020f7b9"} Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.935538 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-6xdtt" event={"ID":"379fba83-cf2f-4b1b-a0f1-fc42313a79b9","Type":"ContainerDied","Data":"0e0351b376811cb40442ff8d937d3ef665762945a27afe12883ebff22e067c61"} Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.935590 4833 scope.go:117] "RemoveContainer" containerID="9b7a0edff4b23e5075bda639e2d0d09d456b682633eede98fe8a308be6fc6bd7" Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.935823 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-6xdtt" Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.938879 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-m6ckw" podStartSLOduration=2.938864019 podStartE2EDuration="2.938864019s" podCreationTimestamp="2026-02-19 13:04:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:04:50.933265366 +0000 UTC m=+1101.328784144" watchObservedRunningTime="2026-02-19 13:04:50.938864019 +0000 UTC m=+1101.334382787" Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.941877 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0085e416-340f-4ebb-b0fb-7501606bddf5","Type":"ContainerStarted","Data":"389a52e6065b10e3ec3145721fb2900ebebc51e23cb7bb8e8fc79f4963945962"} Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.944749 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7594f6fd59-hxk8t" event={"ID":"a6ff486b-931e-4973-9bac-5d68a07e9991","Type":"ContainerStarted","Data":"f6a94f6a7dae47ac1a91b4c62249b344f7a8676064d9c7b66dad989a6b188235"} Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.946868 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c","Type":"ContainerStarted","Data":"17158a9baff7b285b449f6592537af3e9a9c927a2a42f753060b986d1a09cc58"} Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.960799 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-m4tzv" podStartSLOduration=3.9607649780000003 podStartE2EDuration="3.960764978s" podCreationTimestamp="2026-02-19 13:04:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:04:50.951738657 +0000 UTC m=+1101.347257445" watchObservedRunningTime="2026-02-19 13:04:50.960764978 +0000 UTC m=+1101.356283746" Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.963183 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d7d877d97-8jn6g" event={"ID":"af233fd1-f013-4662-a320-68b9af5c43f2","Type":"ContainerStarted","Data":"0d31b4cded23a8433463ecde498ebbe3b7f07895939dd3006cdf9c5c0429662c"} Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.967220 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-djbdf" event={"ID":"ca786541-c266-41b5-a91d-3d626d530b45","Type":"ContainerStarted","Data":"0efb3353c1de49eca92c89a1e6fdac180547740ed2732551b5f7176f0c1fbafe"} Feb 19 13:04:50 crc kubenswrapper[4833]: I0219 13:04:50.999945 4833 scope.go:117] "RemoveContainer" containerID="fce9909203cfc30043779a97d870931c739e2779ed28f908305f652271f4cb4f" Feb 19 13:04:51 crc kubenswrapper[4833]: I0219 13:04:51.001829 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-6xdtt"] Feb 19 13:04:51 crc kubenswrapper[4833]: I0219 13:04:51.015082 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-6xdtt"] Feb 19 13:04:51 crc kubenswrapper[4833]: I0219 13:04:51.261705 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-jkmxm" Feb 19 13:04:51 crc kubenswrapper[4833]: I0219 13:04:51.410096 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7b08485-262e-4098-b9bd-c0acdaad3185-dns-svc\") pod \"d7b08485-262e-4098-b9bd-c0acdaad3185\" (UID: \"d7b08485-262e-4098-b9bd-c0acdaad3185\") " Feb 19 13:04:51 crc kubenswrapper[4833]: I0219 13:04:51.410418 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7b08485-262e-4098-b9bd-c0acdaad3185-dns-swift-storage-0\") pod \"d7b08485-262e-4098-b9bd-c0acdaad3185\" (UID: \"d7b08485-262e-4098-b9bd-c0acdaad3185\") " Feb 19 13:04:51 crc kubenswrapper[4833]: I0219 13:04:51.410442 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7b08485-262e-4098-b9bd-c0acdaad3185-ovsdbserver-nb\") pod \"d7b08485-262e-4098-b9bd-c0acdaad3185\" (UID: \"d7b08485-262e-4098-b9bd-c0acdaad3185\") " Feb 19 13:04:51 crc kubenswrapper[4833]: I0219 13:04:51.410519 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7b08485-262e-4098-b9bd-c0acdaad3185-ovsdbserver-sb\") pod \"d7b08485-262e-4098-b9bd-c0acdaad3185\" (UID: \"d7b08485-262e-4098-b9bd-c0acdaad3185\") " Feb 19 13:04:51 crc kubenswrapper[4833]: I0219 13:04:51.410539 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7b08485-262e-4098-b9bd-c0acdaad3185-config\") pod \"d7b08485-262e-4098-b9bd-c0acdaad3185\" (UID: \"d7b08485-262e-4098-b9bd-c0acdaad3185\") " Feb 19 13:04:51 crc kubenswrapper[4833]: I0219 13:04:51.410583 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvbmz\" (UniqueName: \"kubernetes.io/projected/d7b08485-262e-4098-b9bd-c0acdaad3185-kube-api-access-cvbmz\") pod \"d7b08485-262e-4098-b9bd-c0acdaad3185\" (UID: \"d7b08485-262e-4098-b9bd-c0acdaad3185\") " Feb 19 13:04:51 crc kubenswrapper[4833]: I0219 13:04:51.438196 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7b08485-262e-4098-b9bd-c0acdaad3185-kube-api-access-cvbmz" (OuterVolumeSpecName: "kube-api-access-cvbmz") pod "d7b08485-262e-4098-b9bd-c0acdaad3185" (UID: "d7b08485-262e-4098-b9bd-c0acdaad3185"). InnerVolumeSpecName "kube-api-access-cvbmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:04:51 crc kubenswrapper[4833]: I0219 13:04:51.513154 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvbmz\" (UniqueName: \"kubernetes.io/projected/d7b08485-262e-4098-b9bd-c0acdaad3185-kube-api-access-cvbmz\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:51 crc kubenswrapper[4833]: I0219 13:04:51.567419 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7b08485-262e-4098-b9bd-c0acdaad3185-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d7b08485-262e-4098-b9bd-c0acdaad3185" (UID: "d7b08485-262e-4098-b9bd-c0acdaad3185"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:51 crc kubenswrapper[4833]: I0219 13:04:51.571111 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7b08485-262e-4098-b9bd-c0acdaad3185-config" (OuterVolumeSpecName: "config") pod "d7b08485-262e-4098-b9bd-c0acdaad3185" (UID: "d7b08485-262e-4098-b9bd-c0acdaad3185"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:51 crc kubenswrapper[4833]: I0219 13:04:51.578131 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7b08485-262e-4098-b9bd-c0acdaad3185-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d7b08485-262e-4098-b9bd-c0acdaad3185" (UID: "d7b08485-262e-4098-b9bd-c0acdaad3185"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:51 crc kubenswrapper[4833]: I0219 13:04:51.592187 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7b08485-262e-4098-b9bd-c0acdaad3185-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d7b08485-262e-4098-b9bd-c0acdaad3185" (UID: "d7b08485-262e-4098-b9bd-c0acdaad3185"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:51 crc kubenswrapper[4833]: I0219 13:04:51.618716 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d7b08485-262e-4098-b9bd-c0acdaad3185-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:51 crc kubenswrapper[4833]: I0219 13:04:51.618758 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d7b08485-262e-4098-b9bd-c0acdaad3185-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:51 crc kubenswrapper[4833]: I0219 13:04:51.618772 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7b08485-262e-4098-b9bd-c0acdaad3185-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:51 crc kubenswrapper[4833]: I0219 13:04:51.618783 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d7b08485-262e-4098-b9bd-c0acdaad3185-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:51 crc kubenswrapper[4833]: I0219 13:04:51.620762 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7b08485-262e-4098-b9bd-c0acdaad3185-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d7b08485-262e-4098-b9bd-c0acdaad3185" (UID: "d7b08485-262e-4098-b9bd-c0acdaad3185"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:04:51 crc kubenswrapper[4833]: I0219 13:04:51.720858 4833 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d7b08485-262e-4098-b9bd-c0acdaad3185-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.004212 4833 generic.go:334] "Generic (PLEG): container finished" podID="ca786541-c266-41b5-a91d-3d626d530b45" containerID="666cc9745dbaf145bec303408c021028637b9a80c48a345aed03f200846fe07a" exitCode=0 Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.004392 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-djbdf" event={"ID":"ca786541-c266-41b5-a91d-3d626d530b45","Type":"ContainerStarted","Data":"a799a1f97eb8d698cce065c431af26bed30ba6d576f4e5cb99b0c7e556da4cdb"} Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.004447 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-djbdf" event={"ID":"ca786541-c266-41b5-a91d-3d626d530b45","Type":"ContainerDied","Data":"666cc9745dbaf145bec303408c021028637b9a80c48a345aed03f200846fe07a"} Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.004535 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-djbdf" Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.035060 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0085e416-340f-4ebb-b0fb-7501606bddf5","Type":"ContainerStarted","Data":"f0ea19e5ccec74858066a08c0bd277ba2b730c243e547c329302ad3b1c8e3cf7"} Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.039740 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-djbdf" podStartSLOduration=4.039724535 podStartE2EDuration="4.039724535s" podCreationTimestamp="2026-02-19 13:04:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:04:52.037480678 +0000 UTC m=+1102.432999446" watchObservedRunningTime="2026-02-19 13:04:52.039724535 +0000 UTC m=+1102.435243303" Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.069279 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-jkmxm"
Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.069532 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-jkmxm" event={"ID":"d7b08485-262e-4098-b9bd-c0acdaad3185","Type":"ContainerDied","Data":"6d1e7cb1eb6904a87f8ae1b3a55d7f184e691c5d700592d8a7131e7ec8e379f6"}
Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.069587 4833 scope.go:117] "RemoveContainer" containerID="6e6f9f7ba8f7c99e7e4ad7149173a18a176ae2a1a66b95b98a993e02958b4cdc"
Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.146551 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-jkmxm"]
Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.157350 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-jkmxm"]
Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.339810 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="379fba83-cf2f-4b1b-a0f1-fc42313a79b9" path="/var/lib/kubelet/pods/379fba83-cf2f-4b1b-a0f1-fc42313a79b9/volumes"
Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.340349 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7b08485-262e-4098-b9bd-c0acdaad3185" path="/var/lib/kubelet/pods/d7b08485-262e-4098-b9bd-c0acdaad3185/volumes"
Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.498487 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.535325 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.602596 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7594f6fd59-hxk8t"]
Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.632195 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.686618 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6c6c854b45-nzb7k"]
Feb 19 13:04:52 crc kubenswrapper[4833]: E0219 13:04:52.687146 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="379fba83-cf2f-4b1b-a0f1-fc42313a79b9" containerName="init"
Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.687162 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="379fba83-cf2f-4b1b-a0f1-fc42313a79b9" containerName="init"
Feb 19 13:04:52 crc kubenswrapper[4833]: E0219 13:04:52.687189 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7b08485-262e-4098-b9bd-c0acdaad3185" containerName="init"
Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.687195 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7b08485-262e-4098-b9bd-c0acdaad3185" containerName="init"
Feb 19 13:04:52 crc kubenswrapper[4833]: E0219 13:04:52.687217 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="379fba83-cf2f-4b1b-a0f1-fc42313a79b9" containerName="dnsmasq-dns"
Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.687222 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="379fba83-cf2f-4b1b-a0f1-fc42313a79b9" containerName="dnsmasq-dns"
Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.687511 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="379fba83-cf2f-4b1b-a0f1-fc42313a79b9" containerName="dnsmasq-dns"
Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.687519 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7b08485-262e-4098-b9bd-c0acdaad3185" containerName="init"
Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.689269 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6c6c854b45-nzb7k"
Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.704113 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6c6c854b45-nzb7k"]
Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.756797 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/742dbb25-3782-45e5-931a-185f8a98d24d-scripts\") pod \"horizon-6c6c854b45-nzb7k\" (UID: \"742dbb25-3782-45e5-931a-185f8a98d24d\") " pod="openstack/horizon-6c6c854b45-nzb7k"
Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.756870 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/742dbb25-3782-45e5-931a-185f8a98d24d-config-data\") pod \"horizon-6c6c854b45-nzb7k\" (UID: \"742dbb25-3782-45e5-931a-185f8a98d24d\") " pod="openstack/horizon-6c6c854b45-nzb7k"
Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.756951 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/742dbb25-3782-45e5-931a-185f8a98d24d-logs\") pod \"horizon-6c6c854b45-nzb7k\" (UID: \"742dbb25-3782-45e5-931a-185f8a98d24d\") " pod="openstack/horizon-6c6c854b45-nzb7k"
Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.756970 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px2tp\" (UniqueName: \"kubernetes.io/projected/742dbb25-3782-45e5-931a-185f8a98d24d-kube-api-access-px2tp\") pod \"horizon-6c6c854b45-nzb7k\" (UID: \"742dbb25-3782-45e5-931a-185f8a98d24d\") " pod="openstack/horizon-6c6c854b45-nzb7k"
Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.757012 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/742dbb25-3782-45e5-931a-185f8a98d24d-horizon-secret-key\") pod \"horizon-6c6c854b45-nzb7k\" (UID: \"742dbb25-3782-45e5-931a-185f8a98d24d\") " pod="openstack/horizon-6c6c854b45-nzb7k"
Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.858838 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/742dbb25-3782-45e5-931a-185f8a98d24d-scripts\") pod \"horizon-6c6c854b45-nzb7k\" (UID: \"742dbb25-3782-45e5-931a-185f8a98d24d\") " pod="openstack/horizon-6c6c854b45-nzb7k"
Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.858909 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/742dbb25-3782-45e5-931a-185f8a98d24d-config-data\") pod \"horizon-6c6c854b45-nzb7k\" (UID: \"742dbb25-3782-45e5-931a-185f8a98d24d\") " pod="openstack/horizon-6c6c854b45-nzb7k"
Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.858992 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/742dbb25-3782-45e5-931a-185f8a98d24d-logs\") pod \"horizon-6c6c854b45-nzb7k\" (UID: \"742dbb25-3782-45e5-931a-185f8a98d24d\") " pod="openstack/horizon-6c6c854b45-nzb7k"
Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.859011 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px2tp\" (UniqueName: \"kubernetes.io/projected/742dbb25-3782-45e5-931a-185f8a98d24d-kube-api-access-px2tp\") pod \"horizon-6c6c854b45-nzb7k\" (UID: \"742dbb25-3782-45e5-931a-185f8a98d24d\") " pod="openstack/horizon-6c6c854b45-nzb7k"
Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.859053 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/742dbb25-3782-45e5-931a-185f8a98d24d-horizon-secret-key\") pod \"horizon-6c6c854b45-nzb7k\" (UID: \"742dbb25-3782-45e5-931a-185f8a98d24d\") " pod="openstack/horizon-6c6c854b45-nzb7k"
Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.859540 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/742dbb25-3782-45e5-931a-185f8a98d24d-logs\") pod \"horizon-6c6c854b45-nzb7k\" (UID: \"742dbb25-3782-45e5-931a-185f8a98d24d\") " pod="openstack/horizon-6c6c854b45-nzb7k"
Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.859720 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/742dbb25-3782-45e5-931a-185f8a98d24d-scripts\") pod \"horizon-6c6c854b45-nzb7k\" (UID: \"742dbb25-3782-45e5-931a-185f8a98d24d\") " pod="openstack/horizon-6c6c854b45-nzb7k"
Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.860212 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/742dbb25-3782-45e5-931a-185f8a98d24d-config-data\") pod \"horizon-6c6c854b45-nzb7k\" (UID: \"742dbb25-3782-45e5-931a-185f8a98d24d\") " pod="openstack/horizon-6c6c854b45-nzb7k"
Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.874729 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/742dbb25-3782-45e5-931a-185f8a98d24d-horizon-secret-key\") pod \"horizon-6c6c854b45-nzb7k\" (UID: \"742dbb25-3782-45e5-931a-185f8a98d24d\") " pod="openstack/horizon-6c6c854b45-nzb7k"
Feb 19 13:04:52 crc kubenswrapper[4833]: I0219 13:04:52.876086 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px2tp\" (UniqueName: \"kubernetes.io/projected/742dbb25-3782-45e5-931a-185f8a98d24d-kube-api-access-px2tp\") pod \"horizon-6c6c854b45-nzb7k\" (UID: \"742dbb25-3782-45e5-931a-185f8a98d24d\") " pod="openstack/horizon-6c6c854b45-nzb7k"
Feb 19 13:04:53 crc kubenswrapper[4833]: I0219 13:04:53.033239 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6c6c854b45-nzb7k"
Feb 19 13:04:53 crc kubenswrapper[4833]: I0219 13:04:53.090683 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c","Type":"ContainerStarted","Data":"9a3ec05011b2902c352d5a888e934db730aebe5a7ab9ee353894745f43259329"}
Feb 19 13:04:53 crc kubenswrapper[4833]: I0219 13:04:53.573758 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6c6c854b45-nzb7k"]
Feb 19 13:04:53 crc kubenswrapper[4833]: W0219 13:04:53.597768 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod742dbb25_3782_45e5_931a_185f8a98d24d.slice/crio-e93bf89ff3c35e233076a83615765c98494191cc9181a35f1ada92b7b1e4959c WatchSource:0}: Error finding container e93bf89ff3c35e233076a83615765c98494191cc9181a35f1ada92b7b1e4959c: Status 404 returned error can't find the container with id e93bf89ff3c35e233076a83615765c98494191cc9181a35f1ada92b7b1e4959c
Feb 19 13:04:54 crc kubenswrapper[4833]: I0219 13:04:54.109284 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c","Type":"ContainerStarted","Data":"1c54ba5489ce6e6e858ca98d378902c6effb53ffc0816deab135d69b894b6e2b"}
Feb 19 13:04:54 crc kubenswrapper[4833]: I0219 13:04:54.109814 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c" containerName="glance-log" containerID="cri-o://9a3ec05011b2902c352d5a888e934db730aebe5a7ab9ee353894745f43259329" gracePeriod=30
Feb 19 13:04:54 crc kubenswrapper[4833]: I0219 13:04:54.111149 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c" containerName="glance-httpd" containerID="cri-o://1c54ba5489ce6e6e858ca98d378902c6effb53ffc0816deab135d69b894b6e2b" gracePeriod=30
Feb 19 13:04:54 crc kubenswrapper[4833]: I0219 13:04:54.117794 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c6c854b45-nzb7k" event={"ID":"742dbb25-3782-45e5-931a-185f8a98d24d","Type":"ContainerStarted","Data":"e93bf89ff3c35e233076a83615765c98494191cc9181a35f1ada92b7b1e4959c"}
Feb 19 13:04:54 crc kubenswrapper[4833]: I0219 13:04:54.121629 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0085e416-340f-4ebb-b0fb-7501606bddf5","Type":"ContainerStarted","Data":"1cf7acab124ef1f76de517a331a16b2be4edc97ff380d739a2f88c93c375782c"}
Feb 19 13:04:54 crc kubenswrapper[4833]: I0219 13:04:54.121756 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0085e416-340f-4ebb-b0fb-7501606bddf5" containerName="glance-log" containerID="cri-o://f0ea19e5ccec74858066a08c0bd277ba2b730c243e547c329302ad3b1c8e3cf7" gracePeriod=30
Feb 19 13:04:54 crc kubenswrapper[4833]: I0219 13:04:54.121782 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0085e416-340f-4ebb-b0fb-7501606bddf5" containerName="glance-httpd" containerID="cri-o://1cf7acab124ef1f76de517a331a16b2be4edc97ff380d739a2f88c93c375782c" gracePeriod=30
Feb 19 13:04:54 crc kubenswrapper[4833]: I0219 13:04:54.140220 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.140197037 podStartE2EDuration="6.140197037s" podCreationTimestamp="2026-02-19 13:04:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:04:54.137154282 +0000 UTC m=+1104.532673050" watchObservedRunningTime="2026-02-19 13:04:54.140197037 +0000 UTC m=+1104.535715805"
Feb 19 13:04:54 crc kubenswrapper[4833]: I0219 13:04:54.164892 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.164875892 podStartE2EDuration="6.164875892s" podCreationTimestamp="2026-02-19 13:04:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:04:54.153304831 +0000 UTC m=+1104.548823599" watchObservedRunningTime="2026-02-19 13:04:54.164875892 +0000 UTC m=+1104.560394660"
Feb 19 13:04:55 crc kubenswrapper[4833]: I0219 13:04:55.136750 4833 generic.go:334] "Generic (PLEG): container finished" podID="0085e416-340f-4ebb-b0fb-7501606bddf5" containerID="1cf7acab124ef1f76de517a331a16b2be4edc97ff380d739a2f88c93c375782c" exitCode=0
Feb 19 13:04:55 crc kubenswrapper[4833]: I0219 13:04:55.137278 4833 generic.go:334] "Generic (PLEG): container finished" podID="0085e416-340f-4ebb-b0fb-7501606bddf5" containerID="f0ea19e5ccec74858066a08c0bd277ba2b730c243e547c329302ad3b1c8e3cf7" exitCode=143
Feb 19 13:04:55 crc kubenswrapper[4833]: I0219 13:04:55.136939 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0085e416-340f-4ebb-b0fb-7501606bddf5","Type":"ContainerDied","Data":"1cf7acab124ef1f76de517a331a16b2be4edc97ff380d739a2f88c93c375782c"}
Feb 19 13:04:55 crc kubenswrapper[4833]: I0219 13:04:55.137346 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0085e416-340f-4ebb-b0fb-7501606bddf5","Type":"ContainerDied","Data":"f0ea19e5ccec74858066a08c0bd277ba2b730c243e547c329302ad3b1c8e3cf7"}
Feb 19 13:04:55 crc kubenswrapper[4833]: I0219 13:04:55.142025 4833 generic.go:334] "Generic (PLEG): container finished" podID="3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c" containerID="1c54ba5489ce6e6e858ca98d378902c6effb53ffc0816deab135d69b894b6e2b" exitCode=0
Feb 19 13:04:55 crc kubenswrapper[4833]: I0219 13:04:55.142076 4833 generic.go:334] "Generic (PLEG): container finished" podID="3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c" containerID="9a3ec05011b2902c352d5a888e934db730aebe5a7ab9ee353894745f43259329" exitCode=143
Feb 19 13:04:55 crc kubenswrapper[4833]: I0219 13:04:55.142096 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c","Type":"ContainerDied","Data":"1c54ba5489ce6e6e858ca98d378902c6effb53ffc0816deab135d69b894b6e2b"}
Feb 19 13:04:55 crc kubenswrapper[4833]: I0219 13:04:55.142269 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c","Type":"ContainerDied","Data":"9a3ec05011b2902c352d5a888e934db730aebe5a7ab9ee353894745f43259329"}
Feb 19 13:04:56 crc kubenswrapper[4833]: I0219 13:04:56.084724 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Feb 19 13:04:56 crc kubenswrapper[4833]: I0219 13:04:56.167350 4833 generic.go:334] "Generic (PLEG): container finished" podID="25fd3be3-b99d-49e5-9922-d776198b0c52" containerID="44111b677fe64a1499e00cd8977512b8520ed595020ab2f0a9dc238288b49854" exitCode=0
Feb 19 13:04:56 crc kubenswrapper[4833]: I0219 13:04:56.167402 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-m4tzv" event={"ID":"25fd3be3-b99d-49e5-9922-d776198b0c52","Type":"ContainerDied","Data":"44111b677fe64a1499e00cd8977512b8520ed595020ab2f0a9dc238288b49854"}
Feb 19 13:04:57 crc kubenswrapper[4833]: I0219 13:04:57.982935 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7d7d877d97-8jn6g"]
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.006536 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5bc667ffbb-qqgnx"]
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.007802 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5bc667ffbb-qqgnx"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.009718 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.025617 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5bc667ffbb-qqgnx"]
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.071378 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6c6c854b45-nzb7k"]
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.098200 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7b954444d4-2mwt9"]
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.099473 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7b954444d4-2mwt9"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.125662 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b954444d4-2mwt9"]
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.183653 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbfe4179-53a2-4a74-9045-7a498c9aad70-logs\") pod \"horizon-5bc667ffbb-qqgnx\" (UID: \"bbfe4179-53a2-4a74-9045-7a498c9aad70\") " pod="openstack/horizon-5bc667ffbb-qqgnx"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.183722 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcjt9\" (UniqueName: \"kubernetes.io/projected/bbfe4179-53a2-4a74-9045-7a498c9aad70-kube-api-access-fcjt9\") pod \"horizon-5bc667ffbb-qqgnx\" (UID: \"bbfe4179-53a2-4a74-9045-7a498c9aad70\") " pod="openstack/horizon-5bc667ffbb-qqgnx"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.184182 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bbfe4179-53a2-4a74-9045-7a498c9aad70-scripts\") pod \"horizon-5bc667ffbb-qqgnx\" (UID: \"bbfe4179-53a2-4a74-9045-7a498c9aad70\") " pod="openstack/horizon-5bc667ffbb-qqgnx"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.184315 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bbfe4179-53a2-4a74-9045-7a498c9aad70-config-data\") pod \"horizon-5bc667ffbb-qqgnx\" (UID: \"bbfe4179-53a2-4a74-9045-7a498c9aad70\") " pod="openstack/horizon-5bc667ffbb-qqgnx"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.184582 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bbfe4179-53a2-4a74-9045-7a498c9aad70-horizon-secret-key\") pod \"horizon-5bc667ffbb-qqgnx\" (UID: \"bbfe4179-53a2-4a74-9045-7a498c9aad70\") " pod="openstack/horizon-5bc667ffbb-qqgnx"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.184729 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbfe4179-53a2-4a74-9045-7a498c9aad70-horizon-tls-certs\") pod \"horizon-5bc667ffbb-qqgnx\" (UID: \"bbfe4179-53a2-4a74-9045-7a498c9aad70\") " pod="openstack/horizon-5bc667ffbb-qqgnx"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.185194 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbfe4179-53a2-4a74-9045-7a498c9aad70-combined-ca-bundle\") pod \"horizon-5bc667ffbb-qqgnx\" (UID: \"bbfe4179-53a2-4a74-9045-7a498c9aad70\") " pod="openstack/horizon-5bc667ffbb-qqgnx"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.286438 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/88341f77-7fab-4dba-be1d-8e11becd2953-scripts\") pod \"horizon-7b954444d4-2mwt9\" (UID: \"88341f77-7fab-4dba-be1d-8e11becd2953\") " pod="openstack/horizon-7b954444d4-2mwt9"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.286523 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdqxt\" (UniqueName: \"kubernetes.io/projected/88341f77-7fab-4dba-be1d-8e11becd2953-kube-api-access-hdqxt\") pod \"horizon-7b954444d4-2mwt9\" (UID: \"88341f77-7fab-4dba-be1d-8e11becd2953\") " pod="openstack/horizon-7b954444d4-2mwt9"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.286685 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88341f77-7fab-4dba-be1d-8e11becd2953-combined-ca-bundle\") pod \"horizon-7b954444d4-2mwt9\" (UID: \"88341f77-7fab-4dba-be1d-8e11becd2953\") " pod="openstack/horizon-7b954444d4-2mwt9"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.286721 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbfe4179-53a2-4a74-9045-7a498c9aad70-horizon-tls-certs\") pod \"horizon-5bc667ffbb-qqgnx\" (UID: \"bbfe4179-53a2-4a74-9045-7a498c9aad70\") " pod="openstack/horizon-5bc667ffbb-qqgnx"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.286745 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88341f77-7fab-4dba-be1d-8e11becd2953-logs\") pod \"horizon-7b954444d4-2mwt9\" (UID: \"88341f77-7fab-4dba-be1d-8e11becd2953\") " pod="openstack/horizon-7b954444d4-2mwt9"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.286774 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbfe4179-53a2-4a74-9045-7a498c9aad70-combined-ca-bundle\") pod \"horizon-5bc667ffbb-qqgnx\" (UID: \"bbfe4179-53a2-4a74-9045-7a498c9aad70\") " pod="openstack/horizon-5bc667ffbb-qqgnx"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.286827 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/88341f77-7fab-4dba-be1d-8e11becd2953-config-data\") pod \"horizon-7b954444d4-2mwt9\" (UID: \"88341f77-7fab-4dba-be1d-8e11becd2953\") " pod="openstack/horizon-7b954444d4-2mwt9"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.286866 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/88341f77-7fab-4dba-be1d-8e11becd2953-horizon-tls-certs\") pod \"horizon-7b954444d4-2mwt9\" (UID: \"88341f77-7fab-4dba-be1d-8e11becd2953\") " pod="openstack/horizon-7b954444d4-2mwt9"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.287094 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbfe4179-53a2-4a74-9045-7a498c9aad70-logs\") pod \"horizon-5bc667ffbb-qqgnx\" (UID: \"bbfe4179-53a2-4a74-9045-7a498c9aad70\") " pod="openstack/horizon-5bc667ffbb-qqgnx"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.287162 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcjt9\" (UniqueName: \"kubernetes.io/projected/bbfe4179-53a2-4a74-9045-7a498c9aad70-kube-api-access-fcjt9\") pod \"horizon-5bc667ffbb-qqgnx\" (UID: \"bbfe4179-53a2-4a74-9045-7a498c9aad70\") " pod="openstack/horizon-5bc667ffbb-qqgnx"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.287192 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bbfe4179-53a2-4a74-9045-7a498c9aad70-scripts\") pod \"horizon-5bc667ffbb-qqgnx\" (UID: \"bbfe4179-53a2-4a74-9045-7a498c9aad70\") " pod="openstack/horizon-5bc667ffbb-qqgnx"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.287228 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/88341f77-7fab-4dba-be1d-8e11becd2953-horizon-secret-key\") pod \"horizon-7b954444d4-2mwt9\" (UID: \"88341f77-7fab-4dba-be1d-8e11becd2953\") " pod="openstack/horizon-7b954444d4-2mwt9"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.287276 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bbfe4179-53a2-4a74-9045-7a498c9aad70-config-data\") pod \"horizon-5bc667ffbb-qqgnx\" (UID: \"bbfe4179-53a2-4a74-9045-7a498c9aad70\") " pod="openstack/horizon-5bc667ffbb-qqgnx"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.287305 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bbfe4179-53a2-4a74-9045-7a498c9aad70-horizon-secret-key\") pod \"horizon-5bc667ffbb-qqgnx\" (UID: \"bbfe4179-53a2-4a74-9045-7a498c9aad70\") " pod="openstack/horizon-5bc667ffbb-qqgnx"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.287476 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbfe4179-53a2-4a74-9045-7a498c9aad70-logs\") pod \"horizon-5bc667ffbb-qqgnx\" (UID: \"bbfe4179-53a2-4a74-9045-7a498c9aad70\") " pod="openstack/horizon-5bc667ffbb-qqgnx"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.288070 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bbfe4179-53a2-4a74-9045-7a498c9aad70-scripts\") pod \"horizon-5bc667ffbb-qqgnx\" (UID: \"bbfe4179-53a2-4a74-9045-7a498c9aad70\") " pod="openstack/horizon-5bc667ffbb-qqgnx"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.288904 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bbfe4179-53a2-4a74-9045-7a498c9aad70-config-data\") pod \"horizon-5bc667ffbb-qqgnx\" (UID: \"bbfe4179-53a2-4a74-9045-7a498c9aad70\") " pod="openstack/horizon-5bc667ffbb-qqgnx"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.293542 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bbfe4179-53a2-4a74-9045-7a498c9aad70-horizon-secret-key\") pod \"horizon-5bc667ffbb-qqgnx\" (UID: \"bbfe4179-53a2-4a74-9045-7a498c9aad70\") " pod="openstack/horizon-5bc667ffbb-qqgnx"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.293634 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbfe4179-53a2-4a74-9045-7a498c9aad70-horizon-tls-certs\") pod \"horizon-5bc667ffbb-qqgnx\" (UID: \"bbfe4179-53a2-4a74-9045-7a498c9aad70\") " pod="openstack/horizon-5bc667ffbb-qqgnx"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.293930 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbfe4179-53a2-4a74-9045-7a498c9aad70-combined-ca-bundle\") pod \"horizon-5bc667ffbb-qqgnx\" (UID: \"bbfe4179-53a2-4a74-9045-7a498c9aad70\") " pod="openstack/horizon-5bc667ffbb-qqgnx"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.304595 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcjt9\" (UniqueName: \"kubernetes.io/projected/bbfe4179-53a2-4a74-9045-7a498c9aad70-kube-api-access-fcjt9\") pod \"horizon-5bc667ffbb-qqgnx\" (UID: \"bbfe4179-53a2-4a74-9045-7a498c9aad70\") " pod="openstack/horizon-5bc667ffbb-qqgnx"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.328794 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5bc667ffbb-qqgnx"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.389608 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/88341f77-7fab-4dba-be1d-8e11becd2953-horizon-secret-key\") pod \"horizon-7b954444d4-2mwt9\" (UID: \"88341f77-7fab-4dba-be1d-8e11becd2953\") " pod="openstack/horizon-7b954444d4-2mwt9"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.389877 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/88341f77-7fab-4dba-be1d-8e11becd2953-scripts\") pod \"horizon-7b954444d4-2mwt9\" (UID: \"88341f77-7fab-4dba-be1d-8e11becd2953\") " pod="openstack/horizon-7b954444d4-2mwt9"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.390896 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdqxt\" (UniqueName: \"kubernetes.io/projected/88341f77-7fab-4dba-be1d-8e11becd2953-kube-api-access-hdqxt\") pod \"horizon-7b954444d4-2mwt9\" (UID: \"88341f77-7fab-4dba-be1d-8e11becd2953\") " pod="openstack/horizon-7b954444d4-2mwt9"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.390946 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88341f77-7fab-4dba-be1d-8e11becd2953-combined-ca-bundle\") pod \"horizon-7b954444d4-2mwt9\" (UID: \"88341f77-7fab-4dba-be1d-8e11becd2953\") " pod="openstack/horizon-7b954444d4-2mwt9"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.390973 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88341f77-7fab-4dba-be1d-8e11becd2953-logs\") pod \"horizon-7b954444d4-2mwt9\" (UID: \"88341f77-7fab-4dba-be1d-8e11becd2953\") " pod="openstack/horizon-7b954444d4-2mwt9"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.391176 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/88341f77-7fab-4dba-be1d-8e11becd2953-scripts\") pod \"horizon-7b954444d4-2mwt9\" (UID: \"88341f77-7fab-4dba-be1d-8e11becd2953\") " pod="openstack/horizon-7b954444d4-2mwt9"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.393932 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/88341f77-7fab-4dba-be1d-8e11becd2953-config-data\") pod \"horizon-7b954444d4-2mwt9\" (UID: \"88341f77-7fab-4dba-be1d-8e11becd2953\") " pod="openstack/horizon-7b954444d4-2mwt9"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.393996 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/88341f77-7fab-4dba-be1d-8e11becd2953-horizon-tls-certs\") pod \"horizon-7b954444d4-2mwt9\" (UID: \"88341f77-7fab-4dba-be1d-8e11becd2953\") " pod="openstack/horizon-7b954444d4-2mwt9"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.394195 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/88341f77-7fab-4dba-be1d-8e11becd2953-horizon-secret-key\") pod \"horizon-7b954444d4-2mwt9\" (UID: \"88341f77-7fab-4dba-be1d-8e11becd2953\") " pod="openstack/horizon-7b954444d4-2mwt9"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.394420 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.395224 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/88341f77-7fab-4dba-be1d-8e11becd2953-config-data\") pod \"horizon-7b954444d4-2mwt9\" (UID: \"88341f77-7fab-4dba-be1d-8e11becd2953\") " pod="openstack/horizon-7b954444d4-2mwt9"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.396887 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88341f77-7fab-4dba-be1d-8e11becd2953-logs\") pod \"horizon-7b954444d4-2mwt9\" (UID: \"88341f77-7fab-4dba-be1d-8e11becd2953\") " pod="openstack/horizon-7b954444d4-2mwt9"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.397992 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88341f77-7fab-4dba-be1d-8e11becd2953-combined-ca-bundle\") pod \"horizon-7b954444d4-2mwt9\" (UID: \"88341f77-7fab-4dba-be1d-8e11becd2953\") " pod="openstack/horizon-7b954444d4-2mwt9"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.404129 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/88341f77-7fab-4dba-be1d-8e11becd2953-horizon-tls-certs\") pod \"horizon-7b954444d4-2mwt9\" (UID: \"88341f77-7fab-4dba-be1d-8e11becd2953\") " pod="openstack/horizon-7b954444d4-2mwt9"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.412028 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdqxt\" (UniqueName: \"kubernetes.io/projected/88341f77-7fab-4dba-be1d-8e11becd2953-kube-api-access-hdqxt\") pod \"horizon-7b954444d4-2mwt9\" (UID: \"88341f77-7fab-4dba-be1d-8e11becd2953\") " pod="openstack/horizon-7b954444d4-2mwt9"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.416817 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7b954444d4-2mwt9"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.485648 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.495291 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0085e416-340f-4ebb-b0fb-7501606bddf5-scripts\") pod \"0085e416-340f-4ebb-b0fb-7501606bddf5\" (UID: \"0085e416-340f-4ebb-b0fb-7501606bddf5\") "
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.495351 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0085e416-340f-4ebb-b0fb-7501606bddf5-combined-ca-bundle\") pod \"0085e416-340f-4ebb-b0fb-7501606bddf5\" (UID: \"0085e416-340f-4ebb-b0fb-7501606bddf5\") "
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.495425 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"0085e416-340f-4ebb-b0fb-7501606bddf5\" (UID: \"0085e416-340f-4ebb-b0fb-7501606bddf5\") "
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.495474 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-httpd-run\") pod \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\" (UID: \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\") "
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.495514 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-scripts\") pod \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\" (UID: \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\") "
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.495538 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-logs\") pod \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\" (UID: \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\") "
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.495564 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0085e416-340f-4ebb-b0fb-7501606bddf5-config-data\") pod \"0085e416-340f-4ebb-b0fb-7501606bddf5\" (UID: \"0085e416-340f-4ebb-b0fb-7501606bddf5\") "
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.495581 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-config-data\") pod \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\" (UID: \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\") "
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.495604 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\" (UID: \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\") "
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.495623 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0085e416-340f-4ebb-b0fb-7501606bddf5-internal-tls-certs\") pod \"0085e416-340f-4ebb-b0fb-7501606bddf5\" (UID: \"0085e416-340f-4ebb-b0fb-7501606bddf5\") "
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.495640 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0085e416-340f-4ebb-b0fb-7501606bddf5-httpd-run\") pod \"0085e416-340f-4ebb-b0fb-7501606bddf5\" (UID: \"0085e416-340f-4ebb-b0fb-7501606bddf5\") "
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.495692 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n4lw\" (UniqueName: \"kubernetes.io/projected/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-kube-api-access-4n4lw\") pod \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\" (UID: \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\") "
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.495744 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8nwz\" (UniqueName: \"kubernetes.io/projected/0085e416-340f-4ebb-b0fb-7501606bddf5-kube-api-access-t8nwz\") pod \"0085e416-340f-4ebb-b0fb-7501606bddf5\" (UID: \"0085e416-340f-4ebb-b0fb-7501606bddf5\") "
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.495769 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-public-tls-certs\") pod \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\" (UID: \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\") "
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.495787 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0085e416-340f-4ebb-b0fb-7501606bddf5-logs\") pod \"0085e416-340f-4ebb-b0fb-7501606bddf5\" (UID: \"0085e416-340f-4ebb-b0fb-7501606bddf5\") "
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.495815 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-combined-ca-bundle\") pod \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\" (UID: \"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c\") "
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.496932 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-logs" (OuterVolumeSpecName: "logs") pod "3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c" (UID: "3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.497175 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0085e416-340f-4ebb-b0fb-7501606bddf5-logs" (OuterVolumeSpecName: "logs") pod "0085e416-340f-4ebb-b0fb-7501606bddf5" (UID: "0085e416-340f-4ebb-b0fb-7501606bddf5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.497470 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0085e416-340f-4ebb-b0fb-7501606bddf5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0085e416-340f-4ebb-b0fb-7501606bddf5" (UID: "0085e416-340f-4ebb-b0fb-7501606bddf5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.497717 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c" (UID: "3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.500979 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-kube-api-access-4n4lw" (OuterVolumeSpecName: "kube-api-access-4n4lw") pod "3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c" (UID: "3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c"). InnerVolumeSpecName "kube-api-access-4n4lw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.501296 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "0085e416-340f-4ebb-b0fb-7501606bddf5" (UID: "0085e416-340f-4ebb-b0fb-7501606bddf5"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.502264 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0085e416-340f-4ebb-b0fb-7501606bddf5-scripts" (OuterVolumeSpecName: "scripts") pod "0085e416-340f-4ebb-b0fb-7501606bddf5" (UID: "0085e416-340f-4ebb-b0fb-7501606bddf5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.503774 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0085e416-340f-4ebb-b0fb-7501606bddf5-kube-api-access-t8nwz" (OuterVolumeSpecName: "kube-api-access-t8nwz") pod "0085e416-340f-4ebb-b0fb-7501606bddf5" (UID: "0085e416-340f-4ebb-b0fb-7501606bddf5"). InnerVolumeSpecName "kube-api-access-t8nwz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.507833 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-scripts" (OuterVolumeSpecName: "scripts") pod "3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c" (UID: "3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.515334 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-m4tzv"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.560844 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c" (UID: "3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.562325 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c" (UID: "3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.571833 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-config-data" (OuterVolumeSpecName: "config-data") pod "3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c" (UID: "3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.575919 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0085e416-340f-4ebb-b0fb-7501606bddf5-config-data" (OuterVolumeSpecName: "config-data") pod "0085e416-340f-4ebb-b0fb-7501606bddf5" (UID: "0085e416-340f-4ebb-b0fb-7501606bddf5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.585108 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c" (UID: "3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.588052 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0085e416-340f-4ebb-b0fb-7501606bddf5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0085e416-340f-4ebb-b0fb-7501606bddf5" (UID: "0085e416-340f-4ebb-b0fb-7501606bddf5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.594992 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0085e416-340f-4ebb-b0fb-7501606bddf5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0085e416-340f-4ebb-b0fb-7501606bddf5" (UID: "0085e416-340f-4ebb-b0fb-7501606bddf5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.596847 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25fd3be3-b99d-49e5-9922-d776198b0c52-scripts\") pod \"25fd3be3-b99d-49e5-9922-d776198b0c52\" (UID: \"25fd3be3-b99d-49e5-9922-d776198b0c52\") "
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.596908 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jx5vn\" (UniqueName: \"kubernetes.io/projected/25fd3be3-b99d-49e5-9922-d776198b0c52-kube-api-access-jx5vn\") pod \"25fd3be3-b99d-49e5-9922-d776198b0c52\" (UID: \"25fd3be3-b99d-49e5-9922-d776198b0c52\") "
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.596955 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25fd3be3-b99d-49e5-9922-d776198b0c52-combined-ca-bundle\") pod \"25fd3be3-b99d-49e5-9922-d776198b0c52\" (UID: \"25fd3be3-b99d-49e5-9922-d776198b0c52\") "
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.596981 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/25fd3be3-b99d-49e5-9922-d776198b0c52-credential-keys\") pod \"25fd3be3-b99d-49e5-9922-d776198b0c52\" (UID: \"25fd3be3-b99d-49e5-9922-d776198b0c52\") "
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.597026 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25fd3be3-b99d-49e5-9922-d776198b0c52-fernet-keys\") pod \"25fd3be3-b99d-49e5-9922-d776198b0c52\" (UID: \"25fd3be3-b99d-49e5-9922-d776198b0c52\") "
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.597073 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25fd3be3-b99d-49e5-9922-d776198b0c52-config-data\") pod \"25fd3be3-b99d-49e5-9922-d776198b0c52\" (UID: \"25fd3be3-b99d-49e5-9922-d776198b0c52\") "
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.597333 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0085e416-340f-4ebb-b0fb-7501606bddf5-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.597348 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0085e416-340f-4ebb-b0fb-7501606bddf5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.597373 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" "
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.597403 4833 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.597411 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.597419 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-logs\") on node \"crc\" DevicePath \"\""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.597427 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0085e416-340f-4ebb-b0fb-7501606bddf5-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.597435 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.597447 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" "
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.597456 4833 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0085e416-340f-4ebb-b0fb-7501606bddf5-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.597464 4833 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0085e416-340f-4ebb-b0fb-7501606bddf5-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.597477 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n4lw\" (UniqueName: \"kubernetes.io/projected/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-kube-api-access-4n4lw\") on node \"crc\" DevicePath \"\""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.597487 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8nwz\" (UniqueName: \"kubernetes.io/projected/0085e416-340f-4ebb-b0fb-7501606bddf5-kube-api-access-t8nwz\") on node \"crc\" DevicePath \"\""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.597515 4833 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.597525 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0085e416-340f-4ebb-b0fb-7501606bddf5-logs\") on node \"crc\" DevicePath \"\""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.597534 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.599739 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25fd3be3-b99d-49e5-9922-d776198b0c52-scripts" (OuterVolumeSpecName: "scripts") pod "25fd3be3-b99d-49e5-9922-d776198b0c52" (UID: "25fd3be3-b99d-49e5-9922-d776198b0c52"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.602702 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25fd3be3-b99d-49e5-9922-d776198b0c52-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "25fd3be3-b99d-49e5-9922-d776198b0c52" (UID: "25fd3be3-b99d-49e5-9922-d776198b0c52"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.603565 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25fd3be3-b99d-49e5-9922-d776198b0c52-kube-api-access-jx5vn" (OuterVolumeSpecName: "kube-api-access-jx5vn") pod "25fd3be3-b99d-49e5-9922-d776198b0c52" (UID: "25fd3be3-b99d-49e5-9922-d776198b0c52"). InnerVolumeSpecName "kube-api-access-jx5vn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.607816 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25fd3be3-b99d-49e5-9922-d776198b0c52-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "25fd3be3-b99d-49e5-9922-d776198b0c52" (UID: "25fd3be3-b99d-49e5-9922-d776198b0c52"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.621364 4833 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.621544 4833 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc"
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.622850 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25fd3be3-b99d-49e5-9922-d776198b0c52-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25fd3be3-b99d-49e5-9922-d776198b0c52" (UID: "25fd3be3-b99d-49e5-9922-d776198b0c52"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.626754 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25fd3be3-b99d-49e5-9922-d776198b0c52-config-data" (OuterVolumeSpecName: "config-data") pod "25fd3be3-b99d-49e5-9922-d776198b0c52" (UID: "25fd3be3-b99d-49e5-9922-d776198b0c52"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.699280 4833 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/25fd3be3-b99d-49e5-9922-d776198b0c52-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.699324 4833 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.699336 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25fd3be3-b99d-49e5-9922-d776198b0c52-config-data\") on node \"crc\" DevicePath \"\""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.699348 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25fd3be3-b99d-49e5-9922-d776198b0c52-scripts\") on node \"crc\" DevicePath \"\""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.699360 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jx5vn\" (UniqueName: \"kubernetes.io/projected/25fd3be3-b99d-49e5-9922-d776198b0c52-kube-api-access-jx5vn\") on node \"crc\" DevicePath \"\""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.699375 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25fd3be3-b99d-49e5-9922-d776198b0c52-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.699386 4833 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/25fd3be3-b99d-49e5-9922-d776198b0c52-credential-keys\") on node \"crc\" DevicePath \"\""
Feb 19 13:04:58 crc kubenswrapper[4833]: I0219 13:04:58.699396 4833 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\""
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.204092 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c","Type":"ContainerDied","Data":"17158a9baff7b285b449f6592537af3e9a9c927a2a42f753060b986d1a09cc58"}
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.204176 4833 scope.go:117] "RemoveContainer" containerID="1c54ba5489ce6e6e858ca98d378902c6effb53ffc0816deab135d69b894b6e2b"
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.204112 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.209144 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.209129 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0085e416-340f-4ebb-b0fb-7501606bddf5","Type":"ContainerDied","Data":"389a52e6065b10e3ec3145721fb2900ebebc51e23cb7bb8e8fc79f4963945962"}
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.216230 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-m4tzv" event={"ID":"25fd3be3-b99d-49e5-9922-d776198b0c52","Type":"ContainerDied","Data":"79b95869fe1e36d20ca026c8784ba8e1c78d6cdadac698fa419a884e175307ce"}
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.216261 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79b95869fe1e36d20ca026c8784ba8e1c78d6cdadac698fa419a884e175307ce"
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.216316 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-m4tzv"
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.270549 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.279626 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.285524 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.292560 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.345205 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 19 13:04:59 crc kubenswrapper[4833]: E0219 13:04:59.345808 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0085e416-340f-4ebb-b0fb-7501606bddf5" containerName="glance-log"
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.345825 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0085e416-340f-4ebb-b0fb-7501606bddf5" containerName="glance-log"
Feb 19 13:04:59 crc kubenswrapper[4833]: E0219 13:04:59.345839 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c" containerName="glance-httpd"
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.345846 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c" containerName="glance-httpd"
Feb 19 13:04:59 crc kubenswrapper[4833]: E0219 13:04:59.345906 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0085e416-340f-4ebb-b0fb-7501606bddf5" containerName="glance-httpd"
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.345914 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0085e416-340f-4ebb-b0fb-7501606bddf5" containerName="glance-httpd"
Feb 19 13:04:59 crc kubenswrapper[4833]: E0219 13:04:59.345924 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25fd3be3-b99d-49e5-9922-d776198b0c52" containerName="keystone-bootstrap"
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.345930 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="25fd3be3-b99d-49e5-9922-d776198b0c52" containerName="keystone-bootstrap"
Feb 19 13:04:59 crc kubenswrapper[4833]: E0219 13:04:59.345942 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c" containerName="glance-log"
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.345947 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c" containerName="glance-log"
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.346117 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c" containerName="glance-log"
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.346141 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="0085e416-340f-4ebb-b0fb-7501606bddf5" containerName="glance-log"
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.346153 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="0085e416-340f-4ebb-b0fb-7501606bddf5" containerName="glance-httpd"
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.346160 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="25fd3be3-b99d-49e5-9922-d776198b0c52" containerName="keystone-bootstrap"
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.346170 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c" containerName="glance-httpd"
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.347316 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.349112 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.349452 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.349587 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.350879 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-6dfnb"
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.351730 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.353215 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.358119 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.360422 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.361446 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.366819 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.518625 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-scripts\") pod \"glance-default-external-api-0\" (UID: \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\") " pod="openstack/glance-default-external-api-0"
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.518687 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da1e5208-5817-401e-bfbb-22088b43b335-logs\") pod \"glance-default-internal-api-0\" (UID: \"da1e5208-5817-401e-bfbb-22088b43b335\") " pod="openstack/glance-default-internal-api-0"
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.518778 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-logs\") pod \"glance-default-external-api-0\" (UID: \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\") " pod="openstack/glance-default-external-api-0"
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.518807 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da1e5208-5817-401e-bfbb-22088b43b335-config-data\") pod \"glance-default-internal-api-0\" (UID: \"da1e5208-5817-401e-bfbb-22088b43b335\") " pod="openstack/glance-default-internal-api-0"
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.518881 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\") " pod="openstack/glance-default-external-api-0"
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.518908 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da1e5208-5817-401e-bfbb-22088b43b335-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"da1e5208-5817-401e-bfbb-22088b43b335\") " pod="openstack/glance-default-internal-api-0"
Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.518963 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-config-data\") pod \"glance-default-external-api-0\" (UID: \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\") "
pod="openstack/glance-default-external-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.518997 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlt2z\" (UniqueName: \"kubernetes.io/projected/da1e5208-5817-401e-bfbb-22088b43b335-kube-api-access-zlt2z\") pod \"glance-default-internal-api-0\" (UID: \"da1e5208-5817-401e-bfbb-22088b43b335\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.519077 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.519114 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"da1e5208-5817-401e-bfbb-22088b43b335\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.519142 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.519161 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da1e5208-5817-401e-bfbb-22088b43b335-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"da1e5208-5817-401e-bfbb-22088b43b335\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.519178 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da1e5208-5817-401e-bfbb-22088b43b335-scripts\") pod \"glance-default-internal-api-0\" (UID: \"da1e5208-5817-401e-bfbb-22088b43b335\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.519199 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.519214 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt665\" (UniqueName: \"kubernetes.io/projected/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-kube-api-access-rt665\") pod \"glance-default-external-api-0\" (UID: \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.519231 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/da1e5208-5817-401e-bfbb-22088b43b335-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"da1e5208-5817-401e-bfbb-22088b43b335\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.529455 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-djbdf" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.606163 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-p9qjz"] Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.606378 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-p9qjz" podUID="5e56bcf5-cac7-4e98-a3b0-43430ecf891e" containerName="dnsmasq-dns" containerID="cri-o://f23405dc86fd18cbe13542d135453395cd350e7aa83fb011a01515eb8eb63398" gracePeriod=10 Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.620905 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.620948 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da1e5208-5817-401e-bfbb-22088b43b335-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"da1e5208-5817-401e-bfbb-22088b43b335\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.620984 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-config-data\") pod \"glance-default-external-api-0\" (UID: \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.621007 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlt2z\" (UniqueName: \"kubernetes.io/projected/da1e5208-5817-401e-bfbb-22088b43b335-kube-api-access-zlt2z\") pod \"glance-default-internal-api-0\" (UID: \"da1e5208-5817-401e-bfbb-22088b43b335\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.621025 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.621062 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"da1e5208-5817-401e-bfbb-22088b43b335\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.621078 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:59 
crc kubenswrapper[4833]: I0219 13:04:59.621097 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da1e5208-5817-401e-bfbb-22088b43b335-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"da1e5208-5817-401e-bfbb-22088b43b335\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.621115 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da1e5208-5817-401e-bfbb-22088b43b335-scripts\") pod \"glance-default-internal-api-0\" (UID: \"da1e5208-5817-401e-bfbb-22088b43b335\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.621138 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.621157 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt665\" (UniqueName: \"kubernetes.io/projected/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-kube-api-access-rt665\") pod \"glance-default-external-api-0\" (UID: \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.621177 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/da1e5208-5817-401e-bfbb-22088b43b335-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"da1e5208-5817-401e-bfbb-22088b43b335\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.621211 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-scripts\") pod \"glance-default-external-api-0\" (UID: \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.621227 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da1e5208-5817-401e-bfbb-22088b43b335-logs\") pod \"glance-default-internal-api-0\" (UID: \"da1e5208-5817-401e-bfbb-22088b43b335\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.621248 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-logs\") pod \"glance-default-external-api-0\" (UID: \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.621265 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da1e5208-5817-401e-bfbb-22088b43b335-config-data\") pod \"glance-default-internal-api-0\" (UID: \"da1e5208-5817-401e-bfbb-22088b43b335\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.625463 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for 
volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"da1e5208-5817-401e-bfbb-22088b43b335\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.625691 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.627070 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da1e5208-5817-401e-bfbb-22088b43b335-logs\") pod \"glance-default-internal-api-0\" (UID: \"da1e5208-5817-401e-bfbb-22088b43b335\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.627428 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-logs\") pod \"glance-default-external-api-0\" (UID: \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.627791 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/da1e5208-5817-401e-bfbb-22088b43b335-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"da1e5208-5817-401e-bfbb-22088b43b335\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.629151 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da1e5208-5817-401e-bfbb-22088b43b335-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"da1e5208-5817-401e-bfbb-22088b43b335\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.630788 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.630907 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da1e5208-5817-401e-bfbb-22088b43b335-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"da1e5208-5817-401e-bfbb-22088b43b335\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.632171 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-scripts\") pod \"glance-default-external-api-0\" (UID: \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.634798 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da1e5208-5817-401e-bfbb-22088b43b335-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"da1e5208-5817-401e-bfbb-22088b43b335\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.642838 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.644442 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da1e5208-5817-401e-bfbb-22088b43b335-config-data\") pod \"glance-default-internal-api-0\" (UID: \"da1e5208-5817-401e-bfbb-22088b43b335\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.644929 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlt2z\" (UniqueName: \"kubernetes.io/projected/da1e5208-5817-401e-bfbb-22088b43b335-kube-api-access-zlt2z\") pod \"glance-default-internal-api-0\" (UID: \"da1e5208-5817-401e-bfbb-22088b43b335\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.646691 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-config-data\") pod \"glance-default-external-api-0\" (UID: \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.647370 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.651202 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt665\" (UniqueName: \"kubernetes.io/projected/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-kube-api-access-rt665\") pod \"glance-default-external-api-0\" (UID: \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.680421 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"da1e5208-5817-401e-bfbb-22088b43b335\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.711587 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.713194 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-m4tzv"] Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.748652 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-m4tzv"] Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.765040 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\") " pod="openstack/glance-default-external-api-0" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.794727 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-hjsb7"] Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.797430 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hjsb7" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.800707 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-lhmgj" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.800903 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.802279 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.802485 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.806469 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hjsb7"] Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.820546 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.888603 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0bf42233-f79a-4f59-9db4-2aab3744b616-fernet-keys\") pod \"keystone-bootstrap-hjsb7\" (UID: \"0bf42233-f79a-4f59-9db4-2aab3744b616\") " pod="openstack/keystone-bootstrap-hjsb7" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.888649 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bf42233-f79a-4f59-9db4-2aab3744b616-config-data\") pod \"keystone-bootstrap-hjsb7\" (UID: \"0bf42233-f79a-4f59-9db4-2aab3744b616\") " pod="openstack/keystone-bootstrap-hjsb7" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.888693 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97c7g\" (UniqueName: \"kubernetes.io/projected/0bf42233-f79a-4f59-9db4-2aab3744b616-kube-api-access-97c7g\") pod \"keystone-bootstrap-hjsb7\" (UID: \"0bf42233-f79a-4f59-9db4-2aab3744b616\") " pod="openstack/keystone-bootstrap-hjsb7" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.888729 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bf42233-f79a-4f59-9db4-2aab3744b616-combined-ca-bundle\") pod 
\"keystone-bootstrap-hjsb7\" (UID: \"0bf42233-f79a-4f59-9db4-2aab3744b616\") " pod="openstack/keystone-bootstrap-hjsb7" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.888776 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0bf42233-f79a-4f59-9db4-2aab3744b616-credential-keys\") pod \"keystone-bootstrap-hjsb7\" (UID: \"0bf42233-f79a-4f59-9db4-2aab3744b616\") " pod="openstack/keystone-bootstrap-hjsb7" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.888810 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bf42233-f79a-4f59-9db4-2aab3744b616-scripts\") pod \"keystone-bootstrap-hjsb7\" (UID: \"0bf42233-f79a-4f59-9db4-2aab3744b616\") " pod="openstack/keystone-bootstrap-hjsb7" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.990974 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0bf42233-f79a-4f59-9db4-2aab3744b616-fernet-keys\") pod \"keystone-bootstrap-hjsb7\" (UID: \"0bf42233-f79a-4f59-9db4-2aab3744b616\") " pod="openstack/keystone-bootstrap-hjsb7" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.991009 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bf42233-f79a-4f59-9db4-2aab3744b616-config-data\") pod \"keystone-bootstrap-hjsb7\" (UID: \"0bf42233-f79a-4f59-9db4-2aab3744b616\") " pod="openstack/keystone-bootstrap-hjsb7" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.991051 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97c7g\" (UniqueName: \"kubernetes.io/projected/0bf42233-f79a-4f59-9db4-2aab3744b616-kube-api-access-97c7g\") pod \"keystone-bootstrap-hjsb7\" (UID: \"0bf42233-f79a-4f59-9db4-2aab3744b616\") " pod="openstack/keystone-bootstrap-hjsb7" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.991811 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bf42233-f79a-4f59-9db4-2aab3744b616-combined-ca-bundle\") pod \"keystone-bootstrap-hjsb7\" (UID: \"0bf42233-f79a-4f59-9db4-2aab3744b616\") " pod="openstack/keystone-bootstrap-hjsb7" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.991973 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0bf42233-f79a-4f59-9db4-2aab3744b616-credential-keys\") pod \"keystone-bootstrap-hjsb7\" (UID: \"0bf42233-f79a-4f59-9db4-2aab3744b616\") " pod="openstack/keystone-bootstrap-hjsb7" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.992032 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bf42233-f79a-4f59-9db4-2aab3744b616-scripts\") pod \"keystone-bootstrap-hjsb7\" (UID: \"0bf42233-f79a-4f59-9db4-2aab3744b616\") " pod="openstack/keystone-bootstrap-hjsb7" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.995792 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0bf42233-f79a-4f59-9db4-2aab3744b616-fernet-keys\") pod \"keystone-bootstrap-hjsb7\" (UID: \"0bf42233-f79a-4f59-9db4-2aab3744b616\") " pod="openstack/keystone-bootstrap-hjsb7" Feb 19 13:04:59 crc 
kubenswrapper[4833]: I0219 13:04:59.996263 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0bf42233-f79a-4f59-9db4-2aab3744b616-credential-keys\") pod \"keystone-bootstrap-hjsb7\" (UID: \"0bf42233-f79a-4f59-9db4-2aab3744b616\") " pod="openstack/keystone-bootstrap-hjsb7" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.996941 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bf42233-f79a-4f59-9db4-2aab3744b616-combined-ca-bundle\") pod \"keystone-bootstrap-hjsb7\" (UID: \"0bf42233-f79a-4f59-9db4-2aab3744b616\") " pod="openstack/keystone-bootstrap-hjsb7" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.996985 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bf42233-f79a-4f59-9db4-2aab3744b616-config-data\") pod \"keystone-bootstrap-hjsb7\" (UID: \"0bf42233-f79a-4f59-9db4-2aab3744b616\") " pod="openstack/keystone-bootstrap-hjsb7" Feb 19 13:04:59 crc kubenswrapper[4833]: I0219 13:04:59.997164 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bf42233-f79a-4f59-9db4-2aab3744b616-scripts\") pod \"keystone-bootstrap-hjsb7\" (UID: \"0bf42233-f79a-4f59-9db4-2aab3744b616\") " pod="openstack/keystone-bootstrap-hjsb7" Feb 19 13:05:00 crc kubenswrapper[4833]: I0219 13:05:00.006544 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 13:05:00 crc kubenswrapper[4833]: I0219 13:05:00.009783 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97c7g\" (UniqueName: \"kubernetes.io/projected/0bf42233-f79a-4f59-9db4-2aab3744b616-kube-api-access-97c7g\") pod \"keystone-bootstrap-hjsb7\" (UID: \"0bf42233-f79a-4f59-9db4-2aab3744b616\") " pod="openstack/keystone-bootstrap-hjsb7" Feb 19 13:05:00 crc kubenswrapper[4833]: I0219 13:05:00.128208 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hjsb7" Feb 19 13:05:00 crc kubenswrapper[4833]: I0219 13:05:00.238413 4833 generic.go:334] "Generic (PLEG): container finished" podID="5e56bcf5-cac7-4e98-a3b0-43430ecf891e" containerID="f23405dc86fd18cbe13542d135453395cd350e7aa83fb011a01515eb8eb63398" exitCode=0 Feb 19 13:05:00 crc kubenswrapper[4833]: I0219 13:05:00.238454 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-p9qjz" event={"ID":"5e56bcf5-cac7-4e98-a3b0-43430ecf891e","Type":"ContainerDied","Data":"f23405dc86fd18cbe13542d135453395cd350e7aa83fb011a01515eb8eb63398"} Feb 19 13:05:00 crc kubenswrapper[4833]: I0219 13:05:00.327782 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0085e416-340f-4ebb-b0fb-7501606bddf5" path="/var/lib/kubelet/pods/0085e416-340f-4ebb-b0fb-7501606bddf5/volumes" Feb 19 13:05:00 crc kubenswrapper[4833]: I0219 13:05:00.328753 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25fd3be3-b99d-49e5-9922-d776198b0c52" path="/var/lib/kubelet/pods/25fd3be3-b99d-49e5-9922-d776198b0c52/volumes" Feb 19 13:05:00 crc kubenswrapper[4833]: I0219 13:05:00.329288 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c" path="/var/lib/kubelet/pods/3c807e2e-0b6c-47ee-abaa-5bdb6e157a1c/volumes" Feb 19 13:05:01 crc kubenswrapper[4833]: I0219 13:05:01.356305 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-p9qjz" podUID="5e56bcf5-cac7-4e98-a3b0-43430ecf891e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Feb 19 13:05:05 crc kubenswrapper[4833]: E0219 13:05:05.084915 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 19 13:05:05 crc kubenswrapper[4833]: E0219 13:05:05.085400 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nb7h5dfh96hb4h9dh64dh58ch67fhdch5cch5dbh575hb7h6hb8h5bdh64bh655h5b6hf5h675h567h68fh56ch64h646h5b9hd9h564h8dh6h697q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7lngh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2ef3a268-01cc-4ba4-b7cc-628bb6328271): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 13:05:06 crc kubenswrapper[4833]: I0219 13:05:06.355853 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-p9qjz" podUID="5e56bcf5-cac7-4e98-a3b0-43430ecf891e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Feb 19 13:05:09 crc kubenswrapper[4833]: E0219 13:05:09.232697 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Feb 19 13:05:09 crc kubenswrapper[4833]: E0219 13:05:09.233470 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2sc72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-htb5q_openstack(e40f6228-f038-4dc4-9180-f399b9a8c30b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 13:05:09 crc kubenswrapper[4833]: E0219 13:05:09.236040 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-htb5q" podUID="e40f6228-f038-4dc4-9180-f399b9a8c30b" Feb 19 13:05:09 crc kubenswrapper[4833]: E0219 13:05:09.244086 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 19 13:05:09 crc kubenswrapper[4833]: E0219 13:05:09.244379 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n67ch654h588h64bhc8hdfh59fh8dh547h99h5f6h94h667h56bh5bbh5d4h547h68bh54fh64ch57bh66fh5dfh589h545h54ch668h68h64bhffh5bbhc6q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mmr6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7d7d877d97-8jn6g_openstack(af233fd1-f013-4662-a320-68b9af5c43f2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 13:05:09 crc kubenswrapper[4833]: I0219 13:05:09.250196 4833 scope.go:117] "RemoveContainer" containerID="9a3ec05011b2902c352d5a888e934db730aebe5a7ab9ee353894745f43259329" Feb 19 13:05:09 crc kubenswrapper[4833]: E0219 13:05:09.250181 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-7d7d877d97-8jn6g" podUID="af233fd1-f013-4662-a320-68b9af5c43f2" Feb 19 13:05:09 crc kubenswrapper[4833]: E0219 13:05:09.358595 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-htb5q" podUID="e40f6228-f038-4dc4-9180-f399b9a8c30b" Feb 19 13:05:16 crc kubenswrapper[4833]: I0219 13:05:16.355550 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-p9qjz" podUID="5e56bcf5-cac7-4e98-a3b0-43430ecf891e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Feb 19 13:05:16 crc kubenswrapper[4833]: I0219 13:05:16.356387 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-p9qjz" Feb 19 13:05:16 
crc kubenswrapper[4833]: E0219 13:05:16.666290 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 19 13:05:16 crc kubenswrapper[4833]: E0219 13:05:16.666462 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fhtf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-hnxz9_openstack(d5aed427-a4af-40b6-bd9c-10284e0935ce): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 13:05:16 crc kubenswrapper[4833]: E0219 13:05:16.667629 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-hnxz9" podUID="d5aed427-a4af-40b6-bd9c-10284e0935ce" Feb 19 13:05:16 crc kubenswrapper[4833]: I0219 13:05:16.929611 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-p9qjz" Feb 19 13:05:16 crc kubenswrapper[4833]: I0219 13:05:16.938612 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7d7d877d97-8jn6g" Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.067609 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e56bcf5-cac7-4e98-a3b0-43430ecf891e-dns-svc\") pod \"5e56bcf5-cac7-4e98-a3b0-43430ecf891e\" (UID: \"5e56bcf5-cac7-4e98-a3b0-43430ecf891e\") " Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.067756 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/af233fd1-f013-4662-a320-68b9af5c43f2-horizon-secret-key\") pod \"af233fd1-f013-4662-a320-68b9af5c43f2\" (UID: \"af233fd1-f013-4662-a320-68b9af5c43f2\") " Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.067793 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e56bcf5-cac7-4e98-a3b0-43430ecf891e-ovsdbserver-nb\") pod \"5e56bcf5-cac7-4e98-a3b0-43430ecf891e\" (UID: \"5e56bcf5-cac7-4e98-a3b0-43430ecf891e\") " Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.067815 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af233fd1-f013-4662-a320-68b9af5c43f2-logs\") pod \"af233fd1-f013-4662-a320-68b9af5c43f2\" (UID: \"af233fd1-f013-4662-a320-68b9af5c43f2\") " Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.068035 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmr6m\" (UniqueName: \"kubernetes.io/projected/af233fd1-f013-4662-a320-68b9af5c43f2-kube-api-access-mmr6m\") pod \"af233fd1-f013-4662-a320-68b9af5c43f2\" (UID: \"af233fd1-f013-4662-a320-68b9af5c43f2\") " Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.068181 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e56bcf5-cac7-4e98-a3b0-43430ecf891e-config\") pod \"5e56bcf5-cac7-4e98-a3b0-43430ecf891e\" (UID: \"5e56bcf5-cac7-4e98-a3b0-43430ecf891e\") " Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.068220 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t95bd\" (UniqueName: \"kubernetes.io/projected/5e56bcf5-cac7-4e98-a3b0-43430ecf891e-kube-api-access-t95bd\") pod \"5e56bcf5-cac7-4e98-a3b0-43430ecf891e\" (UID: \"5e56bcf5-cac7-4e98-a3b0-43430ecf891e\") " Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.068269 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af233fd1-f013-4662-a320-68b9af5c43f2-config-data\") pod \"af233fd1-f013-4662-a320-68b9af5c43f2\" (UID: \"af233fd1-f013-4662-a320-68b9af5c43f2\") " Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.068296 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af233fd1-f013-4662-a320-68b9af5c43f2-scripts\") pod \"af233fd1-f013-4662-a320-68b9af5c43f2\" (UID: \"af233fd1-f013-4662-a320-68b9af5c43f2\") " Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.068336 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e56bcf5-cac7-4e98-a3b0-43430ecf891e-ovsdbserver-sb\") pod \"5e56bcf5-cac7-4e98-a3b0-43430ecf891e\" (UID: 
\"5e56bcf5-cac7-4e98-a3b0-43430ecf891e\") " Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.068862 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af233fd1-f013-4662-a320-68b9af5c43f2-logs" (OuterVolumeSpecName: "logs") pod "af233fd1-f013-4662-a320-68b9af5c43f2" (UID: "af233fd1-f013-4662-a320-68b9af5c43f2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.069235 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af233fd1-f013-4662-a320-68b9af5c43f2-scripts" (OuterVolumeSpecName: "scripts") pod "af233fd1-f013-4662-a320-68b9af5c43f2" (UID: "af233fd1-f013-4662-a320-68b9af5c43f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.069268 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af233fd1-f013-4662-a320-68b9af5c43f2-config-data" (OuterVolumeSpecName: "config-data") pod "af233fd1-f013-4662-a320-68b9af5c43f2" (UID: "af233fd1-f013-4662-a320-68b9af5c43f2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.076840 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af233fd1-f013-4662-a320-68b9af5c43f2-kube-api-access-mmr6m" (OuterVolumeSpecName: "kube-api-access-mmr6m") pod "af233fd1-f013-4662-a320-68b9af5c43f2" (UID: "af233fd1-f013-4662-a320-68b9af5c43f2"). InnerVolumeSpecName "kube-api-access-mmr6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.078110 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e56bcf5-cac7-4e98-a3b0-43430ecf891e-kube-api-access-t95bd" (OuterVolumeSpecName: "kube-api-access-t95bd") pod "5e56bcf5-cac7-4e98-a3b0-43430ecf891e" (UID: "5e56bcf5-cac7-4e98-a3b0-43430ecf891e"). InnerVolumeSpecName "kube-api-access-t95bd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.082590 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af233fd1-f013-4662-a320-68b9af5c43f2-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "af233fd1-f013-4662-a320-68b9af5c43f2" (UID: "af233fd1-f013-4662-a320-68b9af5c43f2"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.111965 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e56bcf5-cac7-4e98-a3b0-43430ecf891e-config" (OuterVolumeSpecName: "config") pod "5e56bcf5-cac7-4e98-a3b0-43430ecf891e" (UID: "5e56bcf5-cac7-4e98-a3b0-43430ecf891e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.117982 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e56bcf5-cac7-4e98-a3b0-43430ecf891e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5e56bcf5-cac7-4e98-a3b0-43430ecf891e" (UID: "5e56bcf5-cac7-4e98-a3b0-43430ecf891e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.121930 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e56bcf5-cac7-4e98-a3b0-43430ecf891e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5e56bcf5-cac7-4e98-a3b0-43430ecf891e" (UID: "5e56bcf5-cac7-4e98-a3b0-43430ecf891e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.126290 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e56bcf5-cac7-4e98-a3b0-43430ecf891e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5e56bcf5-cac7-4e98-a3b0-43430ecf891e" (UID: "5e56bcf5-cac7-4e98-a3b0-43430ecf891e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.170141 4833 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/af233fd1-f013-4662-a320-68b9af5c43f2-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.170187 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e56bcf5-cac7-4e98-a3b0-43430ecf891e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.170213 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af233fd1-f013-4662-a320-68b9af5c43f2-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.170223 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmr6m\" (UniqueName: \"kubernetes.io/projected/af233fd1-f013-4662-a320-68b9af5c43f2-kube-api-access-mmr6m\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.170235 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e56bcf5-cac7-4e98-a3b0-43430ecf891e-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.170246 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t95bd\" (UniqueName: \"kubernetes.io/projected/5e56bcf5-cac7-4e98-a3b0-43430ecf891e-kube-api-access-t95bd\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.170254 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/af233fd1-f013-4662-a320-68b9af5c43f2-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.170264 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/af233fd1-f013-4662-a320-68b9af5c43f2-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.170271 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e56bcf5-cac7-4e98-a3b0-43430ecf891e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.170278 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e56bcf5-cac7-4e98-a3b0-43430ecf891e-dns-svc\") on node \"crc\" 
DevicePath \"\"" Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.419126 4833 generic.go:334] "Generic (PLEG): container finished" podID="8211e149-f236-498d-bc79-183c39d9d62e" containerID="d528054cc919c0f793501e938ac43720b97595b6f5a4ce47acdc323d668237ab" exitCode=0 Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.419222 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-m6ckw" event={"ID":"8211e149-f236-498d-bc79-183c39d9d62e","Type":"ContainerDied","Data":"d528054cc919c0f793501e938ac43720b97595b6f5a4ce47acdc323d668237ab"} Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.420607 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d7d877d97-8jn6g" event={"ID":"af233fd1-f013-4662-a320-68b9af5c43f2","Type":"ContainerDied","Data":"0d31b4cded23a8433463ecde498ebbe3b7f07895939dd3006cdf9c5c0429662c"} Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.420763 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d7d877d97-8jn6g" Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.423618 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-p9qjz" Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.423824 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-p9qjz" event={"ID":"5e56bcf5-cac7-4e98-a3b0-43430ecf891e","Type":"ContainerDied","Data":"2354d65209b98a594726e38f3192fa08b5cc904bd06cea48af78af6f9a362b48"} Feb 19 13:05:17 crc kubenswrapper[4833]: E0219 13:05:17.425728 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-hnxz9" podUID="d5aed427-a4af-40b6-bd9c-10284e0935ce" Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.513472 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7d7d877d97-8jn6g"] Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.520275 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7d7d877d97-8jn6g"] Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.526844 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-p9qjz"] Feb 19 13:05:17 crc kubenswrapper[4833]: I0219 13:05:17.533201 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-p9qjz"] Feb 19 13:05:18 crc kubenswrapper[4833]: E0219 13:05:18.268694 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 19 13:05:18 crc kubenswrapper[4833]: E0219 13:05:18.268948 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9bjm8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-dsz68_openstack(2902e7f1-6f1b-4b67-a9fa-fd031a961900): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 13:05:18 crc kubenswrapper[4833]: E0219 13:05:18.270193 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-dsz68" podUID="2902e7f1-6f1b-4b67-a9fa-fd031a961900" Feb 19 13:05:18 crc kubenswrapper[4833]: I0219 13:05:18.296732 4833 scope.go:117] "RemoveContainer" containerID="1cf7acab124ef1f76de517a331a16b2be4edc97ff380d739a2f88c93c375782c" Feb 19 13:05:18 crc kubenswrapper[4833]: I0219 13:05:18.326027 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e56bcf5-cac7-4e98-a3b0-43430ecf891e" path="/var/lib/kubelet/pods/5e56bcf5-cac7-4e98-a3b0-43430ecf891e/volumes" Feb 19 13:05:18 crc kubenswrapper[4833]: I0219 13:05:18.327087 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af233fd1-f013-4662-a320-68b9af5c43f2" path="/var/lib/kubelet/pods/af233fd1-f013-4662-a320-68b9af5c43f2/volumes" Feb 19 13:05:18 crc kubenswrapper[4833]: E0219 13:05:18.443541 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-dsz68" podUID="2902e7f1-6f1b-4b67-a9fa-fd031a961900" Feb 19 13:05:18 crc kubenswrapper[4833]: I0219 13:05:18.557938 4833 scope.go:117] "RemoveContainer" containerID="f0ea19e5ccec74858066a08c0bd277ba2b730c243e547c329302ad3b1c8e3cf7" Feb 19 13:05:18 crc kubenswrapper[4833]: I0219 13:05:18.619470 4833 scope.go:117] "RemoveContainer" containerID="f23405dc86fd18cbe13542d135453395cd350e7aa83fb011a01515eb8eb63398" Feb 19 13:05:18 crc kubenswrapper[4833]: I0219 13:05:18.654087 4833 scope.go:117] "RemoveContainer" containerID="9aa49bddb3daac39826569d79f66291b50c69594938b275b9fc365c1fd6f2944" Feb 19 13:05:18 crc kubenswrapper[4833]: I0219 13:05:18.860361 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-m6ckw" Feb 19 13:05:18 crc kubenswrapper[4833]: I0219 13:05:18.883665 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 13:05:18 crc kubenswrapper[4833]: W0219 13:05:18.885065 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda1e5208_5817_401e_bfbb_22088b43b335.slice/crio-3861702f02b0ffcdfc8bcda55dd24c5496d69f67743b92ab2b2f57746892751d WatchSource:0}: Error finding container 3861702f02b0ffcdfc8bcda55dd24c5496d69f67743b92ab2b2f57746892751d: Status 404 returned error can't find the container with id 3861702f02b0ffcdfc8bcda55dd24c5496d69f67743b92ab2b2f57746892751d Feb 19 13:05:18 crc kubenswrapper[4833]: I0219 13:05:18.906320 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8211e149-f236-498d-bc79-183c39d9d62e-combined-ca-bundle\") pod \"8211e149-f236-498d-bc79-183c39d9d62e\" (UID: \"8211e149-f236-498d-bc79-183c39d9d62e\") " Feb 19 13:05:18 crc kubenswrapper[4833]: I0219 13:05:18.906369 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8211e149-f236-498d-bc79-183c39d9d62e-config\") pod \"8211e149-f236-498d-bc79-183c39d9d62e\" (UID: \"8211e149-f236-498d-bc79-183c39d9d62e\") " Feb 19 13:05:18 crc kubenswrapper[4833]: I0219 13:05:18.906394 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gj742\" (UniqueName: \"kubernetes.io/projected/8211e149-f236-498d-bc79-183c39d9d62e-kube-api-access-gj742\") pod \"8211e149-f236-498d-bc79-183c39d9d62e\" (UID: \"8211e149-f236-498d-bc79-183c39d9d62e\") " Feb 19 13:05:18 crc kubenswrapper[4833]: I0219 13:05:18.915679 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8211e149-f236-498d-bc79-183c39d9d62e-kube-api-access-gj742" (OuterVolumeSpecName: "kube-api-access-gj742") pod "8211e149-f236-498d-bc79-183c39d9d62e" (UID: "8211e149-f236-498d-bc79-183c39d9d62e"). InnerVolumeSpecName "kube-api-access-gj742". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:05:18 crc kubenswrapper[4833]: I0219 13:05:18.941706 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8211e149-f236-498d-bc79-183c39d9d62e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8211e149-f236-498d-bc79-183c39d9d62e" (UID: "8211e149-f236-498d-bc79-183c39d9d62e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:18 crc kubenswrapper[4833]: I0219 13:05:18.946676 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8211e149-f236-498d-bc79-183c39d9d62e-config" (OuterVolumeSpecName: "config") pod "8211e149-f236-498d-bc79-183c39d9d62e" (UID: "8211e149-f236-498d-bc79-183c39d9d62e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.008610 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8211e149-f236-498d-bc79-183c39d9d62e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.008650 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8211e149-f236-498d-bc79-183c39d9d62e-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.008663 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gj742\" (UniqueName: \"kubernetes.io/projected/8211e149-f236-498d-bc79-183c39d9d62e-kube-api-access-gj742\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.024332 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5bc667ffbb-qqgnx"] Feb 19 13:05:19 crc kubenswrapper[4833]: W0219 13:05:19.032567 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbbfe4179_53a2_4a74_9045_7a498c9aad70.slice/crio-a80acf5d9b9ba7656c767add6059ad421d16a2f6ceb88f7c50fce0faad400c90 WatchSource:0}: Error finding container a80acf5d9b9ba7656c767add6059ad421d16a2f6ceb88f7c50fce0faad400c90: Status 404 returned error can't find the container with id a80acf5d9b9ba7656c767add6059ad421d16a2f6ceb88f7c50fce0faad400c90 Feb 19 13:05:19 crc kubenswrapper[4833]: W0219 13:05:19.033720 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88341f77_7fab_4dba_be1d_8e11becd2953.slice/crio-cbb00767b9e2c01ac8c07ba4e921649ee4a3afb18fdcf2a6c5eebd61580a1c29 WatchSource:0}: Error finding container cbb00767b9e2c01ac8c07ba4e921649ee4a3afb18fdcf2a6c5eebd61580a1c29: Status 404 returned error can't find the container with id cbb00767b9e2c01ac8c07ba4e921649ee4a3afb18fdcf2a6c5eebd61580a1c29 Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.035132 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b954444d4-2mwt9"] Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.099042 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 13:05:19 crc kubenswrapper[4833]: W0219 13:05:19.105760 4833 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc02fc0d_426e_41b7_a8c4_8f5aed508b6b.slice/crio-f5a6e1e0cf6ac6bd95d7e217f2ef7b948c18699b6154ef7178717fe9821a6597 WatchSource:0}: Error finding container f5a6e1e0cf6ac6bd95d7e217f2ef7b948c18699b6154ef7178717fe9821a6597: Status 404 returned error can't find the container with id f5a6e1e0cf6ac6bd95d7e217f2ef7b948c18699b6154ef7178717fe9821a6597 Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.190404 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hjsb7"] Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.466594 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-m6ckw" event={"ID":"8211e149-f236-498d-bc79-183c39d9d62e","Type":"ContainerDied","Data":"d92d961248aae8f7f767e6ae56eb25c8264184535b49e28e77a3bad1f57ba024"} Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.467853 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d92d961248aae8f7f767e6ae56eb25c8264184535b49e28e77a3bad1f57ba024" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.467941 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-m6ckw" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.477047 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bc667ffbb-qqgnx" event={"ID":"bbfe4179-53a2-4a74-9045-7a498c9aad70","Type":"ContainerStarted","Data":"7c2c0e64de50f0dccba6e9b4398327ba4df4280f15a675770e28fa45c64968c4"} Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.477108 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bc667ffbb-qqgnx" event={"ID":"bbfe4179-53a2-4a74-9045-7a498c9aad70","Type":"ContainerStarted","Data":"a80acf5d9b9ba7656c767add6059ad421d16a2f6ceb88f7c50fce0faad400c90"} Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.485324 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7594f6fd59-hxk8t" podUID="a6ff486b-931e-4973-9bac-5d68a07e9991" containerName="horizon-log" containerID="cri-o://d65311c3ccec03d5f9352f0d0b8a53a3e84a5e84ddb604a730b44b0e95c93070" gracePeriod=30 Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.485356 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7594f6fd59-hxk8t" event={"ID":"a6ff486b-931e-4973-9bac-5d68a07e9991","Type":"ContainerStarted","Data":"3d70c6239ac05d67654bb3afebc44385962eceb9e68f31b0b04f547ba1eec565"} Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.485402 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7594f6fd59-hxk8t" event={"ID":"a6ff486b-931e-4973-9bac-5d68a07e9991","Type":"ContainerStarted","Data":"d65311c3ccec03d5f9352f0d0b8a53a3e84a5e84ddb604a730b44b0e95c93070"} Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.485438 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7594f6fd59-hxk8t" podUID="a6ff486b-931e-4973-9bac-5d68a07e9991" containerName="horizon" containerID="cri-o://3d70c6239ac05d67654bb3afebc44385962eceb9e68f31b0b04f547ba1eec565" gracePeriod=30 Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.491939 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b954444d4-2mwt9" event={"ID":"88341f77-7fab-4dba-be1d-8e11becd2953","Type":"ContainerStarted","Data":"7ae6e2b342195d7d61c3069b7f8c93ac9d5af6ff1f1dde2cd11eb1195f49654e"} Feb 19 
13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.491984 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b954444d4-2mwt9" event={"ID":"88341f77-7fab-4dba-be1d-8e11becd2953","Type":"ContainerStarted","Data":"cbb00767b9e2c01ac8c07ba4e921649ee4a3afb18fdcf2a6c5eebd61580a1c29"} Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.496525 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ef3a268-01cc-4ba4-b7cc-628bb6328271","Type":"ContainerStarted","Data":"5b6d9758d28dcd5c467432f262b83933bd6760d53adeea57286377fdcc71b9ca"} Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.499231 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c6c854b45-nzb7k" event={"ID":"742dbb25-3782-45e5-931a-185f8a98d24d","Type":"ContainerStarted","Data":"3980653fdf65ab3e87ef4bea9ad49df8b7e77c5b61072a713a7e9a62f6e490d1"} Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.499271 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c6c854b45-nzb7k" event={"ID":"742dbb25-3782-45e5-931a-185f8a98d24d","Type":"ContainerStarted","Data":"784f5967a6cb31482e89a7e4a8cc956b4398f812bf262c943e4994fd5143c4d4"} Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.499366 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6c6c854b45-nzb7k" podUID="742dbb25-3782-45e5-931a-185f8a98d24d" containerName="horizon-log" containerID="cri-o://784f5967a6cb31482e89a7e4a8cc956b4398f812bf262c943e4994fd5143c4d4" gracePeriod=30 Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.499609 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6c6c854b45-nzb7k" podUID="742dbb25-3782-45e5-931a-185f8a98d24d" containerName="horizon" containerID="cri-o://3980653fdf65ab3e87ef4bea9ad49df8b7e77c5b61072a713a7e9a62f6e490d1" gracePeriod=30 Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.504238 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"da1e5208-5817-401e-bfbb-22088b43b335","Type":"ContainerStarted","Data":"3861702f02b0ffcdfc8bcda55dd24c5496d69f67743b92ab2b2f57746892751d"} Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.508780 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b","Type":"ContainerStarted","Data":"f5a6e1e0cf6ac6bd95d7e217f2ef7b948c18699b6154ef7178717fe9821a6597"} Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.514054 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hjsb7" event={"ID":"0bf42233-f79a-4f59-9db4-2aab3744b616","Type":"ContainerStarted","Data":"6412669b08edf51a42282e218b19aae7620f1f2d8ee154fa4c8a3a037e3b0ca3"} Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.518137 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7594f6fd59-hxk8t" podStartSLOduration=4.914257446 podStartE2EDuration="31.518115172s" podCreationTimestamp="2026-02-19 13:04:48 +0000 UTC" firstStartedPulling="2026-02-19 13:04:50.113034174 +0000 UTC m=+1100.508552942" lastFinishedPulling="2026-02-19 13:05:16.7168919 +0000 UTC m=+1127.112410668" observedRunningTime="2026-02-19 13:05:19.510653845 +0000 UTC m=+1129.906172613" watchObservedRunningTime="2026-02-19 13:05:19.518115172 +0000 UTC m=+1129.913633950" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 
13:05:19.534385 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6c6c854b45-nzb7k" podStartSLOduration=2.7651823540000002 podStartE2EDuration="27.534367294s" podCreationTimestamp="2026-02-19 13:04:52 +0000 UTC" firstStartedPulling="2026-02-19 13:04:53.602566976 +0000 UTC m=+1103.998085744" lastFinishedPulling="2026-02-19 13:05:18.371751906 +0000 UTC m=+1128.767270684" observedRunningTime="2026-02-19 13:05:19.529752046 +0000 UTC m=+1129.925270834" watchObservedRunningTime="2026-02-19 13:05:19.534367294 +0000 UTC m=+1129.929886062" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.725841 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-hq965"] Feb 19 13:05:19 crc kubenswrapper[4833]: E0219 13:05:19.726390 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e56bcf5-cac7-4e98-a3b0-43430ecf891e" containerName="dnsmasq-dns" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.726405 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e56bcf5-cac7-4e98-a3b0-43430ecf891e" containerName="dnsmasq-dns" Feb 19 13:05:19 crc kubenswrapper[4833]: E0219 13:05:19.726432 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8211e149-f236-498d-bc79-183c39d9d62e" containerName="neutron-db-sync" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.726438 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="8211e149-f236-498d-bc79-183c39d9d62e" containerName="neutron-db-sync" Feb 19 13:05:19 crc kubenswrapper[4833]: E0219 13:05:19.726449 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e56bcf5-cac7-4e98-a3b0-43430ecf891e" containerName="init" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.726455 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e56bcf5-cac7-4e98-a3b0-43430ecf891e" containerName="init" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.726633 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e56bcf5-cac7-4e98-a3b0-43430ecf891e" containerName="dnsmasq-dns" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.726652 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="8211e149-f236-498d-bc79-183c39d9d62e" containerName="neutron-db-sync" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.738735 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-hq965"] Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.743887 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-hq965" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.790209 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5c68546898-xbhvb"] Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.791522 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5c68546898-xbhvb" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.796733 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.796783 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-nh77q" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.796911 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.797071 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.830881 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99a828fb-7fd8-432c-890f-2cebf8e2afad-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-hq965\" (UID: \"99a828fb-7fd8-432c-890f-2cebf8e2afad\") " pod="openstack/dnsmasq-dns-6b7b667979-hq965" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.830932 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99a828fb-7fd8-432c-890f-2cebf8e2afad-dns-svc\") pod \"dnsmasq-dns-6b7b667979-hq965\" (UID: \"99a828fb-7fd8-432c-890f-2cebf8e2afad\") " pod="openstack/dnsmasq-dns-6b7b667979-hq965" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.830989 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99a828fb-7fd8-432c-890f-2cebf8e2afad-config\") pod \"dnsmasq-dns-6b7b667979-hq965\" (UID: \"99a828fb-7fd8-432c-890f-2cebf8e2afad\") " pod="openstack/dnsmasq-dns-6b7b667979-hq965" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.831026 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfx5h\" (UniqueName: \"kubernetes.io/projected/99a828fb-7fd8-432c-890f-2cebf8e2afad-kube-api-access-xfx5h\") pod \"dnsmasq-dns-6b7b667979-hq965\" (UID: \"99a828fb-7fd8-432c-890f-2cebf8e2afad\") " pod="openstack/dnsmasq-dns-6b7b667979-hq965" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.831098 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99a828fb-7fd8-432c-890f-2cebf8e2afad-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-hq965\" (UID: \"99a828fb-7fd8-432c-890f-2cebf8e2afad\") " pod="openstack/dnsmasq-dns-6b7b667979-hq965" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.831160 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/99a828fb-7fd8-432c-890f-2cebf8e2afad-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-hq965\" (UID: \"99a828fb-7fd8-432c-890f-2cebf8e2afad\") " pod="openstack/dnsmasq-dns-6b7b667979-hq965" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.831658 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c68546898-xbhvb"] Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.933569 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd-httpd-config\") pod \"neutron-5c68546898-xbhvb\" (UID: \"8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd\") " pod="openstack/neutron-5c68546898-xbhvb" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.933629 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/99a828fb-7fd8-432c-890f-2cebf8e2afad-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-hq965\" (UID: \"99a828fb-7fd8-432c-890f-2cebf8e2afad\") " pod="openstack/dnsmasq-dns-6b7b667979-hq965" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.933652 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd-ovndb-tls-certs\") pod \"neutron-5c68546898-xbhvb\" (UID: \"8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd\") " pod="openstack/neutron-5c68546898-xbhvb" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.933679 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99a828fb-7fd8-432c-890f-2cebf8e2afad-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-hq965\" (UID: \"99a828fb-7fd8-432c-890f-2cebf8e2afad\") " pod="openstack/dnsmasq-dns-6b7b667979-hq965" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.933708 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99a828fb-7fd8-432c-890f-2cebf8e2afad-dns-svc\") pod \"dnsmasq-dns-6b7b667979-hq965\" (UID: \"99a828fb-7fd8-432c-890f-2cebf8e2afad\") " pod="openstack/dnsmasq-dns-6b7b667979-hq965" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.933758 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99a828fb-7fd8-432c-890f-2cebf8e2afad-config\") pod \"dnsmasq-dns-6b7b667979-hq965\" (UID: \"99a828fb-7fd8-432c-890f-2cebf8e2afad\") " pod="openstack/dnsmasq-dns-6b7b667979-hq965" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.933786 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd-config\") pod \"neutron-5c68546898-xbhvb\" (UID: \"8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd\") " pod="openstack/neutron-5c68546898-xbhvb" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.933818 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfx5h\" (UniqueName: \"kubernetes.io/projected/99a828fb-7fd8-432c-890f-2cebf8e2afad-kube-api-access-xfx5h\") pod \"dnsmasq-dns-6b7b667979-hq965\" (UID: \"99a828fb-7fd8-432c-890f-2cebf8e2afad\") " pod="openstack/dnsmasq-dns-6b7b667979-hq965" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.933836 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd-combined-ca-bundle\") pod \"neutron-5c68546898-xbhvb\" (UID: \"8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd\") " pod="openstack/neutron-5c68546898-xbhvb" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.933853 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkxgm\" (UniqueName: 
\"kubernetes.io/projected/8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd-kube-api-access-tkxgm\") pod \"neutron-5c68546898-xbhvb\" (UID: \"8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd\") " pod="openstack/neutron-5c68546898-xbhvb" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.933877 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99a828fb-7fd8-432c-890f-2cebf8e2afad-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-hq965\" (UID: \"99a828fb-7fd8-432c-890f-2cebf8e2afad\") " pod="openstack/dnsmasq-dns-6b7b667979-hq965" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.934993 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99a828fb-7fd8-432c-890f-2cebf8e2afad-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-hq965\" (UID: \"99a828fb-7fd8-432c-890f-2cebf8e2afad\") " pod="openstack/dnsmasq-dns-6b7b667979-hq965" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.935631 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/99a828fb-7fd8-432c-890f-2cebf8e2afad-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-hq965\" (UID: \"99a828fb-7fd8-432c-890f-2cebf8e2afad\") " pod="openstack/dnsmasq-dns-6b7b667979-hq965" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.936180 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99a828fb-7fd8-432c-890f-2cebf8e2afad-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-hq965\" (UID: \"99a828fb-7fd8-432c-890f-2cebf8e2afad\") " pod="openstack/dnsmasq-dns-6b7b667979-hq965" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.936779 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99a828fb-7fd8-432c-890f-2cebf8e2afad-dns-svc\") pod \"dnsmasq-dns-6b7b667979-hq965\" (UID: \"99a828fb-7fd8-432c-890f-2cebf8e2afad\") " pod="openstack/dnsmasq-dns-6b7b667979-hq965" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.937446 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99a828fb-7fd8-432c-890f-2cebf8e2afad-config\") pod \"dnsmasq-dns-6b7b667979-hq965\" (UID: \"99a828fb-7fd8-432c-890f-2cebf8e2afad\") " pod="openstack/dnsmasq-dns-6b7b667979-hq965" Feb 19 13:05:19 crc kubenswrapper[4833]: I0219 13:05:19.967827 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfx5h\" (UniqueName: \"kubernetes.io/projected/99a828fb-7fd8-432c-890f-2cebf8e2afad-kube-api-access-xfx5h\") pod \"dnsmasq-dns-6b7b667979-hq965\" (UID: \"99a828fb-7fd8-432c-890f-2cebf8e2afad\") " pod="openstack/dnsmasq-dns-6b7b667979-hq965" Feb 19 13:05:20 crc kubenswrapper[4833]: I0219 13:05:20.036298 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd-httpd-config\") pod \"neutron-5c68546898-xbhvb\" (UID: \"8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd\") " pod="openstack/neutron-5c68546898-xbhvb" Feb 19 13:05:20 crc kubenswrapper[4833]: I0219 13:05:20.036365 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd-ovndb-tls-certs\") pod \"neutron-5c68546898-xbhvb\" (UID: 
\"8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd\") " pod="openstack/neutron-5c68546898-xbhvb" Feb 19 13:05:20 crc kubenswrapper[4833]: I0219 13:05:20.036460 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd-config\") pod \"neutron-5c68546898-xbhvb\" (UID: \"8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd\") " pod="openstack/neutron-5c68546898-xbhvb" Feb 19 13:05:20 crc kubenswrapper[4833]: I0219 13:05:20.036525 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd-combined-ca-bundle\") pod \"neutron-5c68546898-xbhvb\" (UID: \"8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd\") " pod="openstack/neutron-5c68546898-xbhvb" Feb 19 13:05:20 crc kubenswrapper[4833]: I0219 13:05:20.036553 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkxgm\" (UniqueName: \"kubernetes.io/projected/8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd-kube-api-access-tkxgm\") pod \"neutron-5c68546898-xbhvb\" (UID: \"8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd\") " pod="openstack/neutron-5c68546898-xbhvb" Feb 19 13:05:20 crc kubenswrapper[4833]: I0219 13:05:20.041959 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd-httpd-config\") pod \"neutron-5c68546898-xbhvb\" (UID: \"8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd\") " pod="openstack/neutron-5c68546898-xbhvb" Feb 19 13:05:20 crc kubenswrapper[4833]: I0219 13:05:20.043584 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd-config\") pod \"neutron-5c68546898-xbhvb\" (UID: \"8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd\") " pod="openstack/neutron-5c68546898-xbhvb" Feb 19 13:05:20 crc kubenswrapper[4833]: I0219 13:05:20.044193 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd-ovndb-tls-certs\") pod \"neutron-5c68546898-xbhvb\" (UID: \"8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd\") " pod="openstack/neutron-5c68546898-xbhvb" Feb 19 13:05:20 crc kubenswrapper[4833]: I0219 13:05:20.044310 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd-combined-ca-bundle\") pod \"neutron-5c68546898-xbhvb\" (UID: \"8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd\") " pod="openstack/neutron-5c68546898-xbhvb" Feb 19 13:05:20 crc kubenswrapper[4833]: I0219 13:05:20.056291 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkxgm\" (UniqueName: \"kubernetes.io/projected/8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd-kube-api-access-tkxgm\") pod \"neutron-5c68546898-xbhvb\" (UID: \"8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd\") " pod="openstack/neutron-5c68546898-xbhvb" Feb 19 13:05:20 crc kubenswrapper[4833]: I0219 13:05:20.132828 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-hq965" Feb 19 13:05:20 crc kubenswrapper[4833]: I0219 13:05:20.142471 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5c68546898-xbhvb" Feb 19 13:05:20 crc kubenswrapper[4833]: I0219 13:05:20.561592 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bc667ffbb-qqgnx" event={"ID":"bbfe4179-53a2-4a74-9045-7a498c9aad70","Type":"ContainerStarted","Data":"da885bfb26f991009fe0b1720edeffac674904d13ac1face22d0ea347803793c"} Feb 19 13:05:20 crc kubenswrapper[4833]: I0219 13:05:20.577269 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"da1e5208-5817-401e-bfbb-22088b43b335","Type":"ContainerStarted","Data":"db109be88fe501c09010be9d5c238b4baa636b5652811d6eaca6c6afe3e03ea5"} Feb 19 13:05:20 crc kubenswrapper[4833]: I0219 13:05:20.593897 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5bc667ffbb-qqgnx" podStartSLOduration=23.593878257 podStartE2EDuration="23.593878257s" podCreationTimestamp="2026-02-19 13:04:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:05:20.580173677 +0000 UTC m=+1130.975692455" watchObservedRunningTime="2026-02-19 13:05:20.593878257 +0000 UTC m=+1130.989397025" Feb 19 13:05:20 crc kubenswrapper[4833]: I0219 13:05:20.599821 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b","Type":"ContainerStarted","Data":"c3c5299ffd2a0d9db03fe85db617f6c5a38862390df2c2ad828aa2da22d9db91"} Feb 19 13:05:20 crc kubenswrapper[4833]: I0219 13:05:20.614975 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hjsb7" event={"ID":"0bf42233-f79a-4f59-9db4-2aab3744b616","Type":"ContainerStarted","Data":"84126134a7ea5b8ebd3145f958543ebbf0257aa351da7b992b8d4c9a1e9a16da"} Feb 19 13:05:20 crc kubenswrapper[4833]: I0219 13:05:20.637833 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b954444d4-2mwt9" event={"ID":"88341f77-7fab-4dba-be1d-8e11becd2953","Type":"ContainerStarted","Data":"da0892517fecf74156492c550e6ccd8f01af17905b2c7e98adc293bcbb9a042b"} Feb 19 13:05:20 crc kubenswrapper[4833]: I0219 13:05:20.644243 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-hjsb7" podStartSLOduration=21.644220325 podStartE2EDuration="21.644220325s" podCreationTimestamp="2026-02-19 13:04:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:05:20.633110327 +0000 UTC m=+1131.028629095" watchObservedRunningTime="2026-02-19 13:05:20.644220325 +0000 UTC m=+1131.039739093" Feb 19 13:05:20 crc kubenswrapper[4833]: I0219 13:05:20.659089 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7b954444d4-2mwt9" podStartSLOduration=22.659069168 podStartE2EDuration="22.659069168s" podCreationTimestamp="2026-02-19 13:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:05:20.6588019 +0000 UTC m=+1131.054320668" watchObservedRunningTime="2026-02-19 13:05:20.659069168 +0000 UTC m=+1131.054587936" Feb 19 13:05:20 crc kubenswrapper[4833]: I0219 13:05:20.775272 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-hq965"] Feb 19 13:05:20 crc kubenswrapper[4833]: I0219 
13:05:20.963110 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c68546898-xbhvb"] Feb 19 13:05:20 crc kubenswrapper[4833]: W0219 13:05:20.999268 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e7a995c_7ce5_4685_a4d3_ec7c4cb807dd.slice/crio-501d5127b11a1c453c17c1d20db4a6ee38463124af7d915516dfb70ed3e52856 WatchSource:0}: Error finding container 501d5127b11a1c453c17c1d20db4a6ee38463124af7d915516dfb70ed3e52856: Status 404 returned error can't find the container with id 501d5127b11a1c453c17c1d20db4a6ee38463124af7d915516dfb70ed3e52856 Feb 19 13:05:21 crc kubenswrapper[4833]: I0219 13:05:21.356935 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-p9qjz" podUID="5e56bcf5-cac7-4e98-a3b0-43430ecf891e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Feb 19 13:05:21 crc kubenswrapper[4833]: I0219 13:05:21.652147 4833 generic.go:334] "Generic (PLEG): container finished" podID="99a828fb-7fd8-432c-890f-2cebf8e2afad" containerID="a685e5b34f15ba2f6c9efea24f3f9682b9d3051fed32e0c5afbb2aea44e71478" exitCode=0 Feb 19 13:05:21 crc kubenswrapper[4833]: I0219 13:05:21.652648 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-hq965" event={"ID":"99a828fb-7fd8-432c-890f-2cebf8e2afad","Type":"ContainerDied","Data":"a685e5b34f15ba2f6c9efea24f3f9682b9d3051fed32e0c5afbb2aea44e71478"} Feb 19 13:05:21 crc kubenswrapper[4833]: I0219 13:05:21.652685 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-hq965" event={"ID":"99a828fb-7fd8-432c-890f-2cebf8e2afad","Type":"ContainerStarted","Data":"55fb4fcb8be56b12eca21c54ec973b74ffa083ba9535bbb1cc1b000123de5ea8"} Feb 19 13:05:21 crc kubenswrapper[4833]: I0219 13:05:21.674388 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c68546898-xbhvb" event={"ID":"8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd","Type":"ContainerStarted","Data":"b3b373fdaba482359d4a41c0b87a9aa0ac363f6d6c8f0b744a1e739cd266cfc2"} Feb 19 13:05:21 crc kubenswrapper[4833]: I0219 13:05:21.674463 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c68546898-xbhvb" event={"ID":"8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd","Type":"ContainerStarted","Data":"501d5127b11a1c453c17c1d20db4a6ee38463124af7d915516dfb70ed3e52856"} Feb 19 13:05:21 crc kubenswrapper[4833]: I0219 13:05:21.676764 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"da1e5208-5817-401e-bfbb-22088b43b335","Type":"ContainerStarted","Data":"c03d5be704808c019046749b4026d6bb6247b02ad09a83db981577c992edda1a"} Feb 19 13:05:21 crc kubenswrapper[4833]: I0219 13:05:21.679955 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b","Type":"ContainerStarted","Data":"a38c6d10b1345839fc705ee714bda676f48e32605d9b2a1f828098a8640318ca"} Feb 19 13:05:21 crc kubenswrapper[4833]: I0219 13:05:21.719619 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=22.719599259 podStartE2EDuration="22.719599259s" podCreationTimestamp="2026-02-19 13:04:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 
13:05:21.714067035 +0000 UTC m=+1132.109585803" watchObservedRunningTime="2026-02-19 13:05:21.719599259 +0000 UTC m=+1132.115118037" Feb 19 13:05:22 crc kubenswrapper[4833]: I0219 13:05:22.042432 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=23.042406514 podStartE2EDuration="23.042406514s" podCreationTimestamp="2026-02-19 13:04:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:05:21.748822381 +0000 UTC m=+1132.144341149" watchObservedRunningTime="2026-02-19 13:05:22.042406514 +0000 UTC m=+1132.437925282" Feb 19 13:05:22 crc kubenswrapper[4833]: I0219 13:05:22.047560 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-84df7d698f-5xhlf"] Feb 19 13:05:22 crc kubenswrapper[4833]: I0219 13:05:22.049404 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-84df7d698f-5xhlf" Feb 19 13:05:22 crc kubenswrapper[4833]: I0219 13:05:22.056440 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 19 13:05:22 crc kubenswrapper[4833]: I0219 13:05:22.056651 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 19 13:05:22 crc kubenswrapper[4833]: I0219 13:05:22.093331 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-84df7d698f-5xhlf"] Feb 19 13:05:22 crc kubenswrapper[4833]: I0219 13:05:22.104630 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-ovndb-tls-certs\") pod \"neutron-84df7d698f-5xhlf\" (UID: \"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae\") " pod="openstack/neutron-84df7d698f-5xhlf" Feb 19 13:05:22 crc kubenswrapper[4833]: I0219 13:05:22.104730 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-public-tls-certs\") pod \"neutron-84df7d698f-5xhlf\" (UID: \"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae\") " pod="openstack/neutron-84df7d698f-5xhlf" Feb 19 13:05:22 crc kubenswrapper[4833]: I0219 13:05:22.104766 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-combined-ca-bundle\") pod \"neutron-84df7d698f-5xhlf\" (UID: \"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae\") " pod="openstack/neutron-84df7d698f-5xhlf" Feb 19 13:05:22 crc kubenswrapper[4833]: I0219 13:05:22.104829 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-config\") pod \"neutron-84df7d698f-5xhlf\" (UID: \"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae\") " pod="openstack/neutron-84df7d698f-5xhlf" Feb 19 13:05:22 crc kubenswrapper[4833]: I0219 13:05:22.104868 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k8hf\" (UniqueName: \"kubernetes.io/projected/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-kube-api-access-6k8hf\") pod \"neutron-84df7d698f-5xhlf\" (UID: \"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae\") " pod="openstack/neutron-84df7d698f-5xhlf" Feb 19 
13:05:22 crc kubenswrapper[4833]: I0219 13:05:22.104914 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-internal-tls-certs\") pod \"neutron-84df7d698f-5xhlf\" (UID: \"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae\") " pod="openstack/neutron-84df7d698f-5xhlf" Feb 19 13:05:22 crc kubenswrapper[4833]: I0219 13:05:22.104941 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-httpd-config\") pod \"neutron-84df7d698f-5xhlf\" (UID: \"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae\") " pod="openstack/neutron-84df7d698f-5xhlf" Feb 19 13:05:22 crc kubenswrapper[4833]: I0219 13:05:22.206725 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-ovndb-tls-certs\") pod \"neutron-84df7d698f-5xhlf\" (UID: \"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae\") " pod="openstack/neutron-84df7d698f-5xhlf" Feb 19 13:05:22 crc kubenswrapper[4833]: I0219 13:05:22.206810 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-public-tls-certs\") pod \"neutron-84df7d698f-5xhlf\" (UID: \"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae\") " pod="openstack/neutron-84df7d698f-5xhlf" Feb 19 13:05:22 crc kubenswrapper[4833]: I0219 13:05:22.206842 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-combined-ca-bundle\") pod \"neutron-84df7d698f-5xhlf\" (UID: \"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae\") " pod="openstack/neutron-84df7d698f-5xhlf" Feb 19 13:05:22 crc kubenswrapper[4833]: I0219 13:05:22.206898 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-config\") pod \"neutron-84df7d698f-5xhlf\" (UID: \"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae\") " pod="openstack/neutron-84df7d698f-5xhlf" Feb 19 13:05:22 crc kubenswrapper[4833]: I0219 13:05:22.206927 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k8hf\" (UniqueName: \"kubernetes.io/projected/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-kube-api-access-6k8hf\") pod \"neutron-84df7d698f-5xhlf\" (UID: \"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae\") " pod="openstack/neutron-84df7d698f-5xhlf" Feb 19 13:05:22 crc kubenswrapper[4833]: I0219 13:05:22.206967 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-httpd-config\") pod \"neutron-84df7d698f-5xhlf\" (UID: \"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae\") " pod="openstack/neutron-84df7d698f-5xhlf" Feb 19 13:05:22 crc kubenswrapper[4833]: I0219 13:05:22.206988 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-internal-tls-certs\") pod \"neutron-84df7d698f-5xhlf\" (UID: \"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae\") " pod="openstack/neutron-84df7d698f-5xhlf" Feb 19 13:05:22 crc kubenswrapper[4833]: I0219 13:05:22.216760 4833 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-public-tls-certs\") pod \"neutron-84df7d698f-5xhlf\" (UID: \"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae\") " pod="openstack/neutron-84df7d698f-5xhlf" Feb 19 13:05:22 crc kubenswrapper[4833]: I0219 13:05:22.217584 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-config\") pod \"neutron-84df7d698f-5xhlf\" (UID: \"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae\") " pod="openstack/neutron-84df7d698f-5xhlf" Feb 19 13:05:22 crc kubenswrapper[4833]: I0219 13:05:22.219154 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-combined-ca-bundle\") pod \"neutron-84df7d698f-5xhlf\" (UID: \"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae\") " pod="openstack/neutron-84df7d698f-5xhlf" Feb 19 13:05:22 crc kubenswrapper[4833]: I0219 13:05:22.220554 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-httpd-config\") pod \"neutron-84df7d698f-5xhlf\" (UID: \"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae\") " pod="openstack/neutron-84df7d698f-5xhlf" Feb 19 13:05:22 crc kubenswrapper[4833]: I0219 13:05:22.220731 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-ovndb-tls-certs\") pod \"neutron-84df7d698f-5xhlf\" (UID: \"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae\") " pod="openstack/neutron-84df7d698f-5xhlf" Feb 19 13:05:22 crc kubenswrapper[4833]: I0219 13:05:22.225333 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-internal-tls-certs\") pod \"neutron-84df7d698f-5xhlf\" (UID: \"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae\") " pod="openstack/neutron-84df7d698f-5xhlf" Feb 19 13:05:22 crc kubenswrapper[4833]: I0219 13:05:22.226053 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k8hf\" (UniqueName: \"kubernetes.io/projected/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-kube-api-access-6k8hf\") pod \"neutron-84df7d698f-5xhlf\" (UID: \"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae\") " pod="openstack/neutron-84df7d698f-5xhlf" Feb 19 13:05:22 crc kubenswrapper[4833]: I0219 13:05:22.368683 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-84df7d698f-5xhlf" Feb 19 13:05:22 crc kubenswrapper[4833]: I0219 13:05:22.707449 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c68546898-xbhvb" event={"ID":"8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd","Type":"ContainerStarted","Data":"c557dfc5ea7cd8c7844b22884524ffed34f0427c6b016924b3037f3635629ef6"} Feb 19 13:05:22 crc kubenswrapper[4833]: I0219 13:05:22.708709 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5c68546898-xbhvb" Feb 19 13:05:22 crc kubenswrapper[4833]: I0219 13:05:22.719297 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-hq965" event={"ID":"99a828fb-7fd8-432c-890f-2cebf8e2afad","Type":"ContainerStarted","Data":"d86d447a36669ce8992ef9ecdd5922ef4f999085d397e80792cbb4ee6e82ce56"} Feb 19 13:05:22 crc kubenswrapper[4833]: I0219 13:05:22.719770 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-hq965" Feb 19 13:05:22 crc kubenswrapper[4833]: I0219 13:05:22.784691 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-hq965" podStartSLOduration=3.784671457 podStartE2EDuration="3.784671457s" podCreationTimestamp="2026-02-19 13:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:05:22.783570187 +0000 UTC m=+1133.179088955" watchObservedRunningTime="2026-02-19 13:05:22.784671457 +0000 UTC m=+1133.180190225" Feb 19 13:05:22 crc kubenswrapper[4833]: I0219 13:05:22.784801 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5c68546898-xbhvb" podStartSLOduration=3.784797421 podStartE2EDuration="3.784797421s" podCreationTimestamp="2026-02-19 13:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:05:22.737837497 +0000 UTC m=+1133.133356275" watchObservedRunningTime="2026-02-19 13:05:22.784797421 +0000 UTC m=+1133.180316189" Feb 19 13:05:23 crc kubenswrapper[4833]: I0219 13:05:23.033426 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6c6c854b45-nzb7k" Feb 19 13:05:23 crc kubenswrapper[4833]: I0219 13:05:23.041127 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-84df7d698f-5xhlf"] Feb 19 13:05:23 crc kubenswrapper[4833]: I0219 13:05:23.734452 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-htb5q" event={"ID":"e40f6228-f038-4dc4-9180-f399b9a8c30b","Type":"ContainerStarted","Data":"173d88ab4c363a41bd4f046280df21d5e7eefc8835360560ad09c58b0eb7ba52"} Feb 19 13:05:23 crc kubenswrapper[4833]: I0219 13:05:23.752484 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-htb5q" podStartSLOduration=3.710397974 podStartE2EDuration="35.752467074s" podCreationTimestamp="2026-02-19 13:04:48 +0000 UTC" firstStartedPulling="2026-02-19 13:04:50.510964634 +0000 UTC m=+1100.906483402" lastFinishedPulling="2026-02-19 13:05:22.553033734 +0000 UTC m=+1132.948552502" observedRunningTime="2026-02-19 13:05:23.747221228 +0000 UTC m=+1134.142740006" watchObservedRunningTime="2026-02-19 13:05:23.752467074 +0000 UTC m=+1134.147985842" Feb 19 13:05:24 crc kubenswrapper[4833]: I0219 13:05:24.746936 4833 generic.go:334] "Generic (PLEG): container 
finished" podID="0bf42233-f79a-4f59-9db4-2aab3744b616" containerID="84126134a7ea5b8ebd3145f958543ebbf0257aa351da7b992b8d4c9a1e9a16da" exitCode=0 Feb 19 13:05:24 crc kubenswrapper[4833]: I0219 13:05:24.747768 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hjsb7" event={"ID":"0bf42233-f79a-4f59-9db4-2aab3744b616","Type":"ContainerDied","Data":"84126134a7ea5b8ebd3145f958543ebbf0257aa351da7b992b8d4c9a1e9a16da"} Feb 19 13:05:25 crc kubenswrapper[4833]: I0219 13:05:25.756938 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84df7d698f-5xhlf" event={"ID":"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae","Type":"ContainerStarted","Data":"cc99e463b036172f50b14fa9a70323a56f34fb1d5242dd065ce40489a219c649"} Feb 19 13:05:25 crc kubenswrapper[4833]: I0219 13:05:25.758970 4833 generic.go:334] "Generic (PLEG): container finished" podID="e40f6228-f038-4dc4-9180-f399b9a8c30b" containerID="173d88ab4c363a41bd4f046280df21d5e7eefc8835360560ad09c58b0eb7ba52" exitCode=0 Feb 19 13:05:25 crc kubenswrapper[4833]: I0219 13:05:25.759069 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-htb5q" event={"ID":"e40f6228-f038-4dc4-9180-f399b9a8c30b","Type":"ContainerDied","Data":"173d88ab4c363a41bd4f046280df21d5e7eefc8835360560ad09c58b0eb7ba52"} Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.043603 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hjsb7" Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.048462 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-htb5q" Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.220965 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e40f6228-f038-4dc4-9180-f399b9a8c30b-combined-ca-bundle\") pod \"e40f6228-f038-4dc4-9180-f399b9a8c30b\" (UID: \"e40f6228-f038-4dc4-9180-f399b9a8c30b\") " Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.221366 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e40f6228-f038-4dc4-9180-f399b9a8c30b-logs\") pod \"e40f6228-f038-4dc4-9180-f399b9a8c30b\" (UID: \"e40f6228-f038-4dc4-9180-f399b9a8c30b\") " Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.221432 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0bf42233-f79a-4f59-9db4-2aab3744b616-credential-keys\") pod \"0bf42233-f79a-4f59-9db4-2aab3744b616\" (UID: \"0bf42233-f79a-4f59-9db4-2aab3744b616\") " Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.221536 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e40f6228-f038-4dc4-9180-f399b9a8c30b-config-data\") pod \"e40f6228-f038-4dc4-9180-f399b9a8c30b\" (UID: \"e40f6228-f038-4dc4-9180-f399b9a8c30b\") " Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.221609 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bf42233-f79a-4f59-9db4-2aab3744b616-combined-ca-bundle\") pod \"0bf42233-f79a-4f59-9db4-2aab3744b616\" (UID: \"0bf42233-f79a-4f59-9db4-2aab3744b616\") " Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.221726 4833 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bf42233-f79a-4f59-9db4-2aab3744b616-config-data\") pod \"0bf42233-f79a-4f59-9db4-2aab3744b616\" (UID: \"0bf42233-f79a-4f59-9db4-2aab3744b616\") " Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.221839 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e40f6228-f038-4dc4-9180-f399b9a8c30b-logs" (OuterVolumeSpecName: "logs") pod "e40f6228-f038-4dc4-9180-f399b9a8c30b" (UID: "e40f6228-f038-4dc4-9180-f399b9a8c30b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.222173 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0bf42233-f79a-4f59-9db4-2aab3744b616-fernet-keys\") pod \"0bf42233-f79a-4f59-9db4-2aab3744b616\" (UID: \"0bf42233-f79a-4f59-9db4-2aab3744b616\") " Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.222647 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sc72\" (UniqueName: \"kubernetes.io/projected/e40f6228-f038-4dc4-9180-f399b9a8c30b-kube-api-access-2sc72\") pod \"e40f6228-f038-4dc4-9180-f399b9a8c30b\" (UID: \"e40f6228-f038-4dc4-9180-f399b9a8c30b\") " Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.222708 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97c7g\" (UniqueName: \"kubernetes.io/projected/0bf42233-f79a-4f59-9db4-2aab3744b616-kube-api-access-97c7g\") pod \"0bf42233-f79a-4f59-9db4-2aab3744b616\" (UID: \"0bf42233-f79a-4f59-9db4-2aab3744b616\") " Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.222751 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e40f6228-f038-4dc4-9180-f399b9a8c30b-scripts\") pod \"e40f6228-f038-4dc4-9180-f399b9a8c30b\" (UID: \"e40f6228-f038-4dc4-9180-f399b9a8c30b\") " Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.222819 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bf42233-f79a-4f59-9db4-2aab3744b616-scripts\") pod \"0bf42233-f79a-4f59-9db4-2aab3744b616\" (UID: \"0bf42233-f79a-4f59-9db4-2aab3744b616\") " Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.223742 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e40f6228-f038-4dc4-9180-f399b9a8c30b-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.228813 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bf42233-f79a-4f59-9db4-2aab3744b616-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0bf42233-f79a-4f59-9db4-2aab3744b616" (UID: "0bf42233-f79a-4f59-9db4-2aab3744b616"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.232798 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bf42233-f79a-4f59-9db4-2aab3744b616-scripts" (OuterVolumeSpecName: "scripts") pod "0bf42233-f79a-4f59-9db4-2aab3744b616" (UID: "0bf42233-f79a-4f59-9db4-2aab3744b616"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.236330 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e40f6228-f038-4dc4-9180-f399b9a8c30b-kube-api-access-2sc72" (OuterVolumeSpecName: "kube-api-access-2sc72") pod "e40f6228-f038-4dc4-9180-f399b9a8c30b" (UID: "e40f6228-f038-4dc4-9180-f399b9a8c30b"). InnerVolumeSpecName "kube-api-access-2sc72". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.236480 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bf42233-f79a-4f59-9db4-2aab3744b616-kube-api-access-97c7g" (OuterVolumeSpecName: "kube-api-access-97c7g") pod "0bf42233-f79a-4f59-9db4-2aab3744b616" (UID: "0bf42233-f79a-4f59-9db4-2aab3744b616"). InnerVolumeSpecName "kube-api-access-97c7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.236433 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bf42233-f79a-4f59-9db4-2aab3744b616-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0bf42233-f79a-4f59-9db4-2aab3744b616" (UID: "0bf42233-f79a-4f59-9db4-2aab3744b616"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.248822 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e40f6228-f038-4dc4-9180-f399b9a8c30b-scripts" (OuterVolumeSpecName: "scripts") pod "e40f6228-f038-4dc4-9180-f399b9a8c30b" (UID: "e40f6228-f038-4dc4-9180-f399b9a8c30b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.254890 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bf42233-f79a-4f59-9db4-2aab3744b616-config-data" (OuterVolumeSpecName: "config-data") pod "0bf42233-f79a-4f59-9db4-2aab3744b616" (UID: "0bf42233-f79a-4f59-9db4-2aab3744b616"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.263205 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bf42233-f79a-4f59-9db4-2aab3744b616-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0bf42233-f79a-4f59-9db4-2aab3744b616" (UID: "0bf42233-f79a-4f59-9db4-2aab3744b616"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.271550 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e40f6228-f038-4dc4-9180-f399b9a8c30b-config-data" (OuterVolumeSpecName: "config-data") pod "e40f6228-f038-4dc4-9180-f399b9a8c30b" (UID: "e40f6228-f038-4dc4-9180-f399b9a8c30b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.282749 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e40f6228-f038-4dc4-9180-f399b9a8c30b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e40f6228-f038-4dc4-9180-f399b9a8c30b" (UID: "e40f6228-f038-4dc4-9180-f399b9a8c30b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.324422 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bf42233-f79a-4f59-9db4-2aab3744b616-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.324459 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e40f6228-f038-4dc4-9180-f399b9a8c30b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.324473 4833 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0bf42233-f79a-4f59-9db4-2aab3744b616-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.324484 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e40f6228-f038-4dc4-9180-f399b9a8c30b-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.324515 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bf42233-f79a-4f59-9db4-2aab3744b616-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.324527 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bf42233-f79a-4f59-9db4-2aab3744b616-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.324537 4833 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0bf42233-f79a-4f59-9db4-2aab3744b616-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.324549 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sc72\" (UniqueName: \"kubernetes.io/projected/e40f6228-f038-4dc4-9180-f399b9a8c30b-kube-api-access-2sc72\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.324562 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97c7g\" (UniqueName: \"kubernetes.io/projected/0bf42233-f79a-4f59-9db4-2aab3744b616-kube-api-access-97c7g\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.324572 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e40f6228-f038-4dc4-9180-f399b9a8c30b-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.334665 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5bc667ffbb-qqgnx" Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.334705 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5bc667ffbb-qqgnx" Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.417148 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7b954444d4-2mwt9" Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.417807 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7b954444d4-2mwt9" Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.522235 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/horizon-7594f6fd59-hxk8t" Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.804797 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84df7d698f-5xhlf" event={"ID":"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae","Type":"ContainerStarted","Data":"517d2bc1b11f13728498d37500e666009324462a19a2505aafe1a6c0879ac986"} Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.806393 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hjsb7" event={"ID":"0bf42233-f79a-4f59-9db4-2aab3744b616","Type":"ContainerDied","Data":"6412669b08edf51a42282e218b19aae7620f1f2d8ee154fa4c8a3a037e3b0ca3"} Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.806425 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6412669b08edf51a42282e218b19aae7620f1f2d8ee154fa4c8a3a037e3b0ca3" Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.806529 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hjsb7" Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.810843 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-htb5q" event={"ID":"e40f6228-f038-4dc4-9180-f399b9a8c30b","Type":"ContainerDied","Data":"8ca3faed5cd89bceafb79b5ed3b2503df056c64eb30a2150feb3731e91390a5a"} Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.810894 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ca3faed5cd89bceafb79b5ed3b2503df056c64eb30a2150feb3731e91390a5a" Feb 19 13:05:28 crc kubenswrapper[4833]: I0219 13:05:28.810849 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-htb5q" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.180629 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-54c7bb578f-26gwx"] Feb 19 13:05:29 crc kubenswrapper[4833]: E0219 13:05:29.181451 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bf42233-f79a-4f59-9db4-2aab3744b616" containerName="keystone-bootstrap" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.181468 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bf42233-f79a-4f59-9db4-2aab3744b616" containerName="keystone-bootstrap" Feb 19 13:05:29 crc kubenswrapper[4833]: E0219 13:05:29.181509 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e40f6228-f038-4dc4-9180-f399b9a8c30b" containerName="placement-db-sync" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.181517 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e40f6228-f038-4dc4-9180-f399b9a8c30b" containerName="placement-db-sync" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.181724 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="e40f6228-f038-4dc4-9180-f399b9a8c30b" containerName="placement-db-sync" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.181769 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bf42233-f79a-4f59-9db4-2aab3744b616" containerName="keystone-bootstrap" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.182482 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-54c7bb578f-26gwx" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.190815 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.190940 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.191118 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.191335 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.192922 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-lhmgj" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.193125 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.196150 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-54c7bb578f-26gwx"] Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.243525 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a880b98a-d4ab-49dd-bc84-ff52c67c5432-internal-tls-certs\") pod \"keystone-54c7bb578f-26gwx\" (UID: \"a880b98a-d4ab-49dd-bc84-ff52c67c5432\") " pod="openstack/keystone-54c7bb578f-26gwx" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.243571 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a880b98a-d4ab-49dd-bc84-ff52c67c5432-fernet-keys\") pod \"keystone-54c7bb578f-26gwx\" (UID: \"a880b98a-d4ab-49dd-bc84-ff52c67c5432\") " pod="openstack/keystone-54c7bb578f-26gwx" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.243611 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a880b98a-d4ab-49dd-bc84-ff52c67c5432-credential-keys\") pod \"keystone-54c7bb578f-26gwx\" (UID: \"a880b98a-d4ab-49dd-bc84-ff52c67c5432\") " pod="openstack/keystone-54c7bb578f-26gwx" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.243630 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a880b98a-d4ab-49dd-bc84-ff52c67c5432-combined-ca-bundle\") pod \"keystone-54c7bb578f-26gwx\" (UID: \"a880b98a-d4ab-49dd-bc84-ff52c67c5432\") " pod="openstack/keystone-54c7bb578f-26gwx" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.243678 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a880b98a-d4ab-49dd-bc84-ff52c67c5432-config-data\") pod \"keystone-54c7bb578f-26gwx\" (UID: \"a880b98a-d4ab-49dd-bc84-ff52c67c5432\") " pod="openstack/keystone-54c7bb578f-26gwx" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.243701 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a880b98a-d4ab-49dd-bc84-ff52c67c5432-public-tls-certs\") pod \"keystone-54c7bb578f-26gwx\" (UID: 
\"a880b98a-d4ab-49dd-bc84-ff52c67c5432\") " pod="openstack/keystone-54c7bb578f-26gwx" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.243728 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5tgp\" (UniqueName: \"kubernetes.io/projected/a880b98a-d4ab-49dd-bc84-ff52c67c5432-kube-api-access-k5tgp\") pod \"keystone-54c7bb578f-26gwx\" (UID: \"a880b98a-d4ab-49dd-bc84-ff52c67c5432\") " pod="openstack/keystone-54c7bb578f-26gwx" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.243775 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a880b98a-d4ab-49dd-bc84-ff52c67c5432-scripts\") pod \"keystone-54c7bb578f-26gwx\" (UID: \"a880b98a-d4ab-49dd-bc84-ff52c67c5432\") " pod="openstack/keystone-54c7bb578f-26gwx" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.270657 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-b8757d6bd-6749q"] Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.275515 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b8757d6bd-6749q" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.277822 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.278179 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.278182 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.278218 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-vxzrm" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.278414 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.280888 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-b8757d6bd-6749q"] Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.369338 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a880b98a-d4ab-49dd-bc84-ff52c67c5432-config-data\") pod \"keystone-54c7bb578f-26gwx\" (UID: \"a880b98a-d4ab-49dd-bc84-ff52c67c5432\") " pod="openstack/keystone-54c7bb578f-26gwx" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.369406 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a880b98a-d4ab-49dd-bc84-ff52c67c5432-public-tls-certs\") pod \"keystone-54c7bb578f-26gwx\" (UID: \"a880b98a-d4ab-49dd-bc84-ff52c67c5432\") " pod="openstack/keystone-54c7bb578f-26gwx" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.369455 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5tgp\" (UniqueName: \"kubernetes.io/projected/a880b98a-d4ab-49dd-bc84-ff52c67c5432-kube-api-access-k5tgp\") pod \"keystone-54c7bb578f-26gwx\" (UID: \"a880b98a-d4ab-49dd-bc84-ff52c67c5432\") " pod="openstack/keystone-54c7bb578f-26gwx" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.369599 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/a880b98a-d4ab-49dd-bc84-ff52c67c5432-scripts\") pod \"keystone-54c7bb578f-26gwx\" (UID: \"a880b98a-d4ab-49dd-bc84-ff52c67c5432\") " pod="openstack/keystone-54c7bb578f-26gwx" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.372781 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/315865dd-deeb-4ad9-8cce-15b7df356b6c-scripts\") pod \"placement-b8757d6bd-6749q\" (UID: \"315865dd-deeb-4ad9-8cce-15b7df356b6c\") " pod="openstack/placement-b8757d6bd-6749q" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.372911 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a880b98a-d4ab-49dd-bc84-ff52c67c5432-internal-tls-certs\") pod \"keystone-54c7bb578f-26gwx\" (UID: \"a880b98a-d4ab-49dd-bc84-ff52c67c5432\") " pod="openstack/keystone-54c7bb578f-26gwx" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.372939 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a880b98a-d4ab-49dd-bc84-ff52c67c5432-fernet-keys\") pod \"keystone-54c7bb578f-26gwx\" (UID: \"a880b98a-d4ab-49dd-bc84-ff52c67c5432\") " pod="openstack/keystone-54c7bb578f-26gwx" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.372961 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/315865dd-deeb-4ad9-8cce-15b7df356b6c-internal-tls-certs\") pod \"placement-b8757d6bd-6749q\" (UID: \"315865dd-deeb-4ad9-8cce-15b7df356b6c\") " pod="openstack/placement-b8757d6bd-6749q" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.373003 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/315865dd-deeb-4ad9-8cce-15b7df356b6c-public-tls-certs\") pod \"placement-b8757d6bd-6749q\" (UID: \"315865dd-deeb-4ad9-8cce-15b7df356b6c\") " pod="openstack/placement-b8757d6bd-6749q" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.373018 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th7tw\" (UniqueName: \"kubernetes.io/projected/315865dd-deeb-4ad9-8cce-15b7df356b6c-kube-api-access-th7tw\") pod \"placement-b8757d6bd-6749q\" (UID: \"315865dd-deeb-4ad9-8cce-15b7df356b6c\") " pod="openstack/placement-b8757d6bd-6749q" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.373044 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/315865dd-deeb-4ad9-8cce-15b7df356b6c-combined-ca-bundle\") pod \"placement-b8757d6bd-6749q\" (UID: \"315865dd-deeb-4ad9-8cce-15b7df356b6c\") " pod="openstack/placement-b8757d6bd-6749q" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.373071 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a880b98a-d4ab-49dd-bc84-ff52c67c5432-credential-keys\") pod \"keystone-54c7bb578f-26gwx\" (UID: \"a880b98a-d4ab-49dd-bc84-ff52c67c5432\") " pod="openstack/keystone-54c7bb578f-26gwx" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.373829 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/315865dd-deeb-4ad9-8cce-15b7df356b6c-logs\") pod \"placement-b8757d6bd-6749q\" (UID: \"315865dd-deeb-4ad9-8cce-15b7df356b6c\") " pod="openstack/placement-b8757d6bd-6749q" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.373860 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a880b98a-d4ab-49dd-bc84-ff52c67c5432-combined-ca-bundle\") pod \"keystone-54c7bb578f-26gwx\" (UID: \"a880b98a-d4ab-49dd-bc84-ff52c67c5432\") " pod="openstack/keystone-54c7bb578f-26gwx" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.373884 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/315865dd-deeb-4ad9-8cce-15b7df356b6c-config-data\") pod \"placement-b8757d6bd-6749q\" (UID: \"315865dd-deeb-4ad9-8cce-15b7df356b6c\") " pod="openstack/placement-b8757d6bd-6749q" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.380300 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a880b98a-d4ab-49dd-bc84-ff52c67c5432-combined-ca-bundle\") pod \"keystone-54c7bb578f-26gwx\" (UID: \"a880b98a-d4ab-49dd-bc84-ff52c67c5432\") " pod="openstack/keystone-54c7bb578f-26gwx" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.381831 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a880b98a-d4ab-49dd-bc84-ff52c67c5432-config-data\") pod \"keystone-54c7bb578f-26gwx\" (UID: \"a880b98a-d4ab-49dd-bc84-ff52c67c5432\") " pod="openstack/keystone-54c7bb578f-26gwx" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.399556 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a880b98a-d4ab-49dd-bc84-ff52c67c5432-public-tls-certs\") pod \"keystone-54c7bb578f-26gwx\" (UID: \"a880b98a-d4ab-49dd-bc84-ff52c67c5432\") " pod="openstack/keystone-54c7bb578f-26gwx" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.400862 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5tgp\" (UniqueName: \"kubernetes.io/projected/a880b98a-d4ab-49dd-bc84-ff52c67c5432-kube-api-access-k5tgp\") pod \"keystone-54c7bb578f-26gwx\" (UID: \"a880b98a-d4ab-49dd-bc84-ff52c67c5432\") " pod="openstack/keystone-54c7bb578f-26gwx" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.403913 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a880b98a-d4ab-49dd-bc84-ff52c67c5432-scripts\") pod \"keystone-54c7bb578f-26gwx\" (UID: \"a880b98a-d4ab-49dd-bc84-ff52c67c5432\") " pod="openstack/keystone-54c7bb578f-26gwx" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.405754 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a880b98a-d4ab-49dd-bc84-ff52c67c5432-internal-tls-certs\") pod \"keystone-54c7bb578f-26gwx\" (UID: \"a880b98a-d4ab-49dd-bc84-ff52c67c5432\") " pod="openstack/keystone-54c7bb578f-26gwx" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.408788 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/a880b98a-d4ab-49dd-bc84-ff52c67c5432-credential-keys\") pod \"keystone-54c7bb578f-26gwx\" (UID: 
\"a880b98a-d4ab-49dd-bc84-ff52c67c5432\") " pod="openstack/keystone-54c7bb578f-26gwx" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.420395 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a880b98a-d4ab-49dd-bc84-ff52c67c5432-fernet-keys\") pod \"keystone-54c7bb578f-26gwx\" (UID: \"a880b98a-d4ab-49dd-bc84-ff52c67c5432\") " pod="openstack/keystone-54c7bb578f-26gwx" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.475735 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/315865dd-deeb-4ad9-8cce-15b7df356b6c-scripts\") pod \"placement-b8757d6bd-6749q\" (UID: \"315865dd-deeb-4ad9-8cce-15b7df356b6c\") " pod="openstack/placement-b8757d6bd-6749q" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.475803 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/315865dd-deeb-4ad9-8cce-15b7df356b6c-internal-tls-certs\") pod \"placement-b8757d6bd-6749q\" (UID: \"315865dd-deeb-4ad9-8cce-15b7df356b6c\") " pod="openstack/placement-b8757d6bd-6749q" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.475828 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/315865dd-deeb-4ad9-8cce-15b7df356b6c-public-tls-certs\") pod \"placement-b8757d6bd-6749q\" (UID: \"315865dd-deeb-4ad9-8cce-15b7df356b6c\") " pod="openstack/placement-b8757d6bd-6749q" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.475842 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th7tw\" (UniqueName: \"kubernetes.io/projected/315865dd-deeb-4ad9-8cce-15b7df356b6c-kube-api-access-th7tw\") pod \"placement-b8757d6bd-6749q\" (UID: \"315865dd-deeb-4ad9-8cce-15b7df356b6c\") " pod="openstack/placement-b8757d6bd-6749q" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.475860 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/315865dd-deeb-4ad9-8cce-15b7df356b6c-combined-ca-bundle\") pod \"placement-b8757d6bd-6749q\" (UID: \"315865dd-deeb-4ad9-8cce-15b7df356b6c\") " pod="openstack/placement-b8757d6bd-6749q" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.475878 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/315865dd-deeb-4ad9-8cce-15b7df356b6c-logs\") pod \"placement-b8757d6bd-6749q\" (UID: \"315865dd-deeb-4ad9-8cce-15b7df356b6c\") " pod="openstack/placement-b8757d6bd-6749q" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.475894 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/315865dd-deeb-4ad9-8cce-15b7df356b6c-config-data\") pod \"placement-b8757d6bd-6749q\" (UID: \"315865dd-deeb-4ad9-8cce-15b7df356b6c\") " pod="openstack/placement-b8757d6bd-6749q" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.478881 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/315865dd-deeb-4ad9-8cce-15b7df356b6c-logs\") pod \"placement-b8757d6bd-6749q\" (UID: \"315865dd-deeb-4ad9-8cce-15b7df356b6c\") " pod="openstack/placement-b8757d6bd-6749q" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.479899 4833 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/315865dd-deeb-4ad9-8cce-15b7df356b6c-config-data\") pod \"placement-b8757d6bd-6749q\" (UID: \"315865dd-deeb-4ad9-8cce-15b7df356b6c\") " pod="openstack/placement-b8757d6bd-6749q" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.481116 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/315865dd-deeb-4ad9-8cce-15b7df356b6c-public-tls-certs\") pod \"placement-b8757d6bd-6749q\" (UID: \"315865dd-deeb-4ad9-8cce-15b7df356b6c\") " pod="openstack/placement-b8757d6bd-6749q" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.482737 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/315865dd-deeb-4ad9-8cce-15b7df356b6c-scripts\") pod \"placement-b8757d6bd-6749q\" (UID: \"315865dd-deeb-4ad9-8cce-15b7df356b6c\") " pod="openstack/placement-b8757d6bd-6749q" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.483540 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/315865dd-deeb-4ad9-8cce-15b7df356b6c-internal-tls-certs\") pod \"placement-b8757d6bd-6749q\" (UID: \"315865dd-deeb-4ad9-8cce-15b7df356b6c\") " pod="openstack/placement-b8757d6bd-6749q" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.517680 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-54c7bb578f-26gwx" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.532271 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/315865dd-deeb-4ad9-8cce-15b7df356b6c-combined-ca-bundle\") pod \"placement-b8757d6bd-6749q\" (UID: \"315865dd-deeb-4ad9-8cce-15b7df356b6c\") " pod="openstack/placement-b8757d6bd-6749q" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.543557 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-64f9d5d984-h9kbm"] Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.549636 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-64f9d5d984-h9kbm" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.557223 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th7tw\" (UniqueName: \"kubernetes.io/projected/315865dd-deeb-4ad9-8cce-15b7df356b6c-kube-api-access-th7tw\") pod \"placement-b8757d6bd-6749q\" (UID: \"315865dd-deeb-4ad9-8cce-15b7df356b6c\") " pod="openstack/placement-b8757d6bd-6749q" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.561812 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-64f9d5d984-h9kbm"] Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.590700 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-b8757d6bd-6749q" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.691094 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f9f5174-162c-418a-8f37-09af448d7716-config-data\") pod \"placement-64f9d5d984-h9kbm\" (UID: \"5f9f5174-162c-418a-8f37-09af448d7716\") " pod="openstack/placement-64f9d5d984-h9kbm" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.691446 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f9f5174-162c-418a-8f37-09af448d7716-combined-ca-bundle\") pod \"placement-64f9d5d984-h9kbm\" (UID: \"5f9f5174-162c-418a-8f37-09af448d7716\") " pod="openstack/placement-64f9d5d984-h9kbm" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.691475 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f9f5174-162c-418a-8f37-09af448d7716-internal-tls-certs\") pod \"placement-64f9d5d984-h9kbm\" (UID: \"5f9f5174-162c-418a-8f37-09af448d7716\") " pod="openstack/placement-64f9d5d984-h9kbm" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.691522 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65rcn\" (UniqueName: \"kubernetes.io/projected/5f9f5174-162c-418a-8f37-09af448d7716-kube-api-access-65rcn\") pod \"placement-64f9d5d984-h9kbm\" (UID: \"5f9f5174-162c-418a-8f37-09af448d7716\") " pod="openstack/placement-64f9d5d984-h9kbm" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.691687 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f9f5174-162c-418a-8f37-09af448d7716-public-tls-certs\") pod \"placement-64f9d5d984-h9kbm\" (UID: \"5f9f5174-162c-418a-8f37-09af448d7716\") " pod="openstack/placement-64f9d5d984-h9kbm" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.691868 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f9f5174-162c-418a-8f37-09af448d7716-scripts\") pod \"placement-64f9d5d984-h9kbm\" (UID: \"5f9f5174-162c-418a-8f37-09af448d7716\") " pod="openstack/placement-64f9d5d984-h9kbm" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.691949 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f9f5174-162c-418a-8f37-09af448d7716-logs\") pod \"placement-64f9d5d984-h9kbm\" (UID: \"5f9f5174-162c-418a-8f37-09af448d7716\") " pod="openstack/placement-64f9d5d984-h9kbm" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.712997 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.713040 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.713058 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.713069 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/glance-default-internal-api-0" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.761171 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.764576 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.793870 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65rcn\" (UniqueName: \"kubernetes.io/projected/5f9f5174-162c-418a-8f37-09af448d7716-kube-api-access-65rcn\") pod \"placement-64f9d5d984-h9kbm\" (UID: \"5f9f5174-162c-418a-8f37-09af448d7716\") " pod="openstack/placement-64f9d5d984-h9kbm" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.793956 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f9f5174-162c-418a-8f37-09af448d7716-public-tls-certs\") pod \"placement-64f9d5d984-h9kbm\" (UID: \"5f9f5174-162c-418a-8f37-09af448d7716\") " pod="openstack/placement-64f9d5d984-h9kbm" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.794082 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f9f5174-162c-418a-8f37-09af448d7716-scripts\") pod \"placement-64f9d5d984-h9kbm\" (UID: \"5f9f5174-162c-418a-8f37-09af448d7716\") " pod="openstack/placement-64f9d5d984-h9kbm" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.794118 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f9f5174-162c-418a-8f37-09af448d7716-logs\") pod \"placement-64f9d5d984-h9kbm\" (UID: \"5f9f5174-162c-418a-8f37-09af448d7716\") " pod="openstack/placement-64f9d5d984-h9kbm" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.794150 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f9f5174-162c-418a-8f37-09af448d7716-config-data\") pod \"placement-64f9d5d984-h9kbm\" (UID: \"5f9f5174-162c-418a-8f37-09af448d7716\") " pod="openstack/placement-64f9d5d984-h9kbm" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.794189 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f9f5174-162c-418a-8f37-09af448d7716-combined-ca-bundle\") pod \"placement-64f9d5d984-h9kbm\" (UID: \"5f9f5174-162c-418a-8f37-09af448d7716\") " pod="openstack/placement-64f9d5d984-h9kbm" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.794215 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f9f5174-162c-418a-8f37-09af448d7716-internal-tls-certs\") pod \"placement-64f9d5d984-h9kbm\" (UID: \"5f9f5174-162c-418a-8f37-09af448d7716\") " pod="openstack/placement-64f9d5d984-h9kbm" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.799439 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5f9f5174-162c-418a-8f37-09af448d7716-logs\") pod \"placement-64f9d5d984-h9kbm\" (UID: \"5f9f5174-162c-418a-8f37-09af448d7716\") " pod="openstack/placement-64f9d5d984-h9kbm" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.802397 4833 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f9f5174-162c-418a-8f37-09af448d7716-internal-tls-certs\") pod \"placement-64f9d5d984-h9kbm\" (UID: \"5f9f5174-162c-418a-8f37-09af448d7716\") " pod="openstack/placement-64f9d5d984-h9kbm" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.802877 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f9f5174-162c-418a-8f37-09af448d7716-combined-ca-bundle\") pod \"placement-64f9d5d984-h9kbm\" (UID: \"5f9f5174-162c-418a-8f37-09af448d7716\") " pod="openstack/placement-64f9d5d984-h9kbm" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.805145 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5f9f5174-162c-418a-8f37-09af448d7716-config-data\") pod \"placement-64f9d5d984-h9kbm\" (UID: \"5f9f5174-162c-418a-8f37-09af448d7716\") " pod="openstack/placement-64f9d5d984-h9kbm" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.805452 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5f9f5174-162c-418a-8f37-09af448d7716-public-tls-certs\") pod \"placement-64f9d5d984-h9kbm\" (UID: \"5f9f5174-162c-418a-8f37-09af448d7716\") " pod="openstack/placement-64f9d5d984-h9kbm" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.816458 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5f9f5174-162c-418a-8f37-09af448d7716-scripts\") pod \"placement-64f9d5d984-h9kbm\" (UID: \"5f9f5174-162c-418a-8f37-09af448d7716\") " pod="openstack/placement-64f9d5d984-h9kbm" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.825134 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84df7d698f-5xhlf" event={"ID":"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae","Type":"ContainerStarted","Data":"81df4e7e1418cb51eb1e2ec710fa80c6f24186e5bcf6282bb8711eec880df876"} Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.830134 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65rcn\" (UniqueName: \"kubernetes.io/projected/5f9f5174-162c-418a-8f37-09af448d7716-kube-api-access-65rcn\") pod \"placement-64f9d5d984-h9kbm\" (UID: \"5f9f5174-162c-418a-8f37-09af448d7716\") " pod="openstack/placement-64f9d5d984-h9kbm" Feb 19 13:05:29 crc kubenswrapper[4833]: I0219 13:05:29.928621 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-64f9d5d984-h9kbm" Feb 19 13:05:30 crc kubenswrapper[4833]: I0219 13:05:30.007386 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 13:05:30 crc kubenswrapper[4833]: I0219 13:05:30.008397 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 13:05:30 crc kubenswrapper[4833]: I0219 13:05:30.008414 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 13:05:30 crc kubenswrapper[4833]: I0219 13:05:30.008541 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 13:05:30 crc kubenswrapper[4833]: I0219 13:05:30.073371 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 13:05:30 crc kubenswrapper[4833]: I0219 13:05:30.077441 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-54c7bb578f-26gwx"] Feb 19 13:05:30 crc kubenswrapper[4833]: I0219 13:05:30.080210 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 13:05:30 crc kubenswrapper[4833]: I0219 13:05:30.135733 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7b667979-hq965" Feb 19 13:05:30 crc kubenswrapper[4833]: I0219 13:05:30.265163 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-djbdf"] Feb 19 13:05:30 crc kubenswrapper[4833]: I0219 13:05:30.265354 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-djbdf" podUID="ca786541-c266-41b5-a91d-3d626d530b45" containerName="dnsmasq-dns" containerID="cri-o://a799a1f97eb8d698cce065c431af26bed30ba6d576f4e5cb99b0c7e556da4cdb" gracePeriod=10 Feb 19 13:05:30 crc kubenswrapper[4833]: I0219 13:05:30.280997 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-b8757d6bd-6749q"] Feb 19 13:05:30 crc kubenswrapper[4833]: I0219 13:05:30.632128 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-64f9d5d984-h9kbm"] Feb 19 13:05:30 crc kubenswrapper[4833]: I0219 13:05:30.834033 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-64f9d5d984-h9kbm" event={"ID":"5f9f5174-162c-418a-8f37-09af448d7716","Type":"ContainerStarted","Data":"017010e96fa092365b8b4f0615ff11b6dd4f61d13c9a2ebd2c6bc3c0187ec905"} Feb 19 13:05:30 crc kubenswrapper[4833]: I0219 13:05:30.835514 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-54c7bb578f-26gwx" event={"ID":"a880b98a-d4ab-49dd-bc84-ff52c67c5432","Type":"ContainerStarted","Data":"ec40cd9429c16df39e829b75b07d55b7775493d08d25be57bcd110dabffbbc2d"} Feb 19 13:05:30 crc kubenswrapper[4833]: I0219 13:05:30.836778 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b8757d6bd-6749q" event={"ID":"315865dd-deeb-4ad9-8cce-15b7df356b6c","Type":"ContainerStarted","Data":"e38383ccb04c1ca9d24dfdb1f59d23dc001b01f71c23f809ac4fa9dc6d6fc550"} Feb 19 13:05:31 crc kubenswrapper[4833]: I0219 13:05:31.329988 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-84df7d698f-5xhlf" podStartSLOduration=9.329970438 podStartE2EDuration="9.329970438s" 
podCreationTimestamp="2026-02-19 13:05:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:05:30.872320018 +0000 UTC m=+1141.267838786" watchObservedRunningTime="2026-02-19 13:05:31.329970438 +0000 UTC m=+1141.725489206" Feb 19 13:05:31 crc kubenswrapper[4833]: I0219 13:05:31.845484 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-54c7bb578f-26gwx" event={"ID":"a880b98a-d4ab-49dd-bc84-ff52c67c5432","Type":"ContainerStarted","Data":"b5d880d162868ac6ac75f19fbd237d44575c0008054c961e5909ad4e68d844f5"} Feb 19 13:05:31 crc kubenswrapper[4833]: I0219 13:05:31.847900 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b8757d6bd-6749q" event={"ID":"315865dd-deeb-4ad9-8cce-15b7df356b6c","Type":"ContainerStarted","Data":"e7621c18f354105869f8b96b2124f21516919184cd1d3c15215fa7c4984c85fe"} Feb 19 13:05:31 crc kubenswrapper[4833]: I0219 13:05:31.850401 4833 generic.go:334] "Generic (PLEG): container finished" podID="ca786541-c266-41b5-a91d-3d626d530b45" containerID="a799a1f97eb8d698cce065c431af26bed30ba6d576f4e5cb99b0c7e556da4cdb" exitCode=0 Feb 19 13:05:31 crc kubenswrapper[4833]: I0219 13:05:31.850502 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-djbdf" event={"ID":"ca786541-c266-41b5-a91d-3d626d530b45","Type":"ContainerDied","Data":"a799a1f97eb8d698cce065c431af26bed30ba6d576f4e5cb99b0c7e556da4cdb"} Feb 19 13:05:31 crc kubenswrapper[4833]: I0219 13:05:31.852377 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-64f9d5d984-h9kbm" event={"ID":"5f9f5174-162c-418a-8f37-09af448d7716","Type":"ContainerStarted","Data":"bceca4cdeceb583aaf6f453ea683dd21d6a88e73e786355ba33bbb579104045d"} Feb 19 13:05:32 crc kubenswrapper[4833]: I0219 13:05:32.215183 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 13:05:32 crc kubenswrapper[4833]: I0219 13:05:32.215288 4833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 13:05:32 crc kubenswrapper[4833]: I0219 13:05:32.234320 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 13:05:32 crc kubenswrapper[4833]: I0219 13:05:32.877484 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-54c7bb578f-26gwx" podStartSLOduration=3.877466712 podStartE2EDuration="3.877466712s" podCreationTimestamp="2026-02-19 13:05:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:05:32.87415182 +0000 UTC m=+1143.269670608" watchObservedRunningTime="2026-02-19 13:05:32.877466712 +0000 UTC m=+1143.272985480" Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.038858 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-djbdf" Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.069141 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.069246 4833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.153817 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.174951 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca786541-c266-41b5-a91d-3d626d530b45-ovsdbserver-sb\") pod \"ca786541-c266-41b5-a91d-3d626d530b45\" (UID: \"ca786541-c266-41b5-a91d-3d626d530b45\") " Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.175003 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66znt\" (UniqueName: \"kubernetes.io/projected/ca786541-c266-41b5-a91d-3d626d530b45-kube-api-access-66znt\") pod \"ca786541-c266-41b5-a91d-3d626d530b45\" (UID: \"ca786541-c266-41b5-a91d-3d626d530b45\") " Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.175082 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca786541-c266-41b5-a91d-3d626d530b45-dns-swift-storage-0\") pod \"ca786541-c266-41b5-a91d-3d626d530b45\" (UID: \"ca786541-c266-41b5-a91d-3d626d530b45\") " Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.175105 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca786541-c266-41b5-a91d-3d626d530b45-config\") pod \"ca786541-c266-41b5-a91d-3d626d530b45\" (UID: \"ca786541-c266-41b5-a91d-3d626d530b45\") " Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.175123 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca786541-c266-41b5-a91d-3d626d530b45-ovsdbserver-nb\") pod \"ca786541-c266-41b5-a91d-3d626d530b45\" (UID: \"ca786541-c266-41b5-a91d-3d626d530b45\") " Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.175203 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca786541-c266-41b5-a91d-3d626d530b45-dns-svc\") pod \"ca786541-c266-41b5-a91d-3d626d530b45\" (UID: \"ca786541-c266-41b5-a91d-3d626d530b45\") " Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.227744 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca786541-c266-41b5-a91d-3d626d530b45-kube-api-access-66znt" (OuterVolumeSpecName: "kube-api-access-66znt") pod "ca786541-c266-41b5-a91d-3d626d530b45" (UID: "ca786541-c266-41b5-a91d-3d626d530b45"). InnerVolumeSpecName "kube-api-access-66znt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.281686 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66znt\" (UniqueName: \"kubernetes.io/projected/ca786541-c266-41b5-a91d-3d626d530b45-kube-api-access-66znt\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.323109 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca786541-c266-41b5-a91d-3d626d530b45-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ca786541-c266-41b5-a91d-3d626d530b45" (UID: "ca786541-c266-41b5-a91d-3d626d530b45"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.332581 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca786541-c266-41b5-a91d-3d626d530b45-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ca786541-c266-41b5-a91d-3d626d530b45" (UID: "ca786541-c266-41b5-a91d-3d626d530b45"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.335757 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca786541-c266-41b5-a91d-3d626d530b45-config" (OuterVolumeSpecName: "config") pod "ca786541-c266-41b5-a91d-3d626d530b45" (UID: "ca786541-c266-41b5-a91d-3d626d530b45"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.363907 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca786541-c266-41b5-a91d-3d626d530b45-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ca786541-c266-41b5-a91d-3d626d530b45" (UID: "ca786541-c266-41b5-a91d-3d626d530b45"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.382819 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca786541-c266-41b5-a91d-3d626d530b45-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ca786541-c266-41b5-a91d-3d626d530b45" (UID: "ca786541-c266-41b5-a91d-3d626d530b45"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.386398 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ca786541-c266-41b5-a91d-3d626d530b45-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.386429 4833 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ca786541-c266-41b5-a91d-3d626d530b45-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.386441 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca786541-c266-41b5-a91d-3d626d530b45-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.386450 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ca786541-c266-41b5-a91d-3d626d530b45-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.386461 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ca786541-c266-41b5-a91d-3d626d530b45-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.894149 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ef3a268-01cc-4ba4-b7cc-628bb6328271","Type":"ContainerStarted","Data":"cd687e064cb7f68bc62094b50ffacef805b2ecc52b09791ac784529029214652"} Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.906079 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-djbdf" event={"ID":"ca786541-c266-41b5-a91d-3d626d530b45","Type":"ContainerDied","Data":"0efb3353c1de49eca92c89a1e6fdac180547740ed2732551b5f7176f0c1fbafe"} Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.906101 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-djbdf" Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.906126 4833 scope.go:117] "RemoveContainer" containerID="a799a1f97eb8d698cce065c431af26bed30ba6d576f4e5cb99b0c7e556da4cdb" Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.923423 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hnxz9" event={"ID":"d5aed427-a4af-40b6-bd9c-10284e0935ce","Type":"ContainerStarted","Data":"38b1634674e1c50f0e04e2286162bbeb49033885f7e7e96701a5067e935c944b"} Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.932553 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-64f9d5d984-h9kbm" event={"ID":"5f9f5174-162c-418a-8f37-09af448d7716","Type":"ContainerStarted","Data":"6d6049249b2c66aabedf295d0e8a89733827c421074df201321962aa3ad3b2c4"} Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.933103 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-64f9d5d984-h9kbm" Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.933130 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-64f9d5d984-h9kbm" Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.935886 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b8757d6bd-6749q" event={"ID":"315865dd-deeb-4ad9-8cce-15b7df356b6c","Type":"ContainerStarted","Data":"d01869bb2a8ef1704e5da7b6e3da6a215e0b7263f86c920ad1ac737c8a3e8137"} Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.935908 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-b8757d6bd-6749q" Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.935927 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-b8757d6bd-6749q" Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.937431 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-djbdf"] Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.961782 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-djbdf"] Feb 19 13:05:33 crc kubenswrapper[4833]: I0219 13:05:33.982661 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-b8757d6bd-6749q" podStartSLOduration=4.982645005 podStartE2EDuration="4.982645005s" podCreationTimestamp="2026-02-19 13:05:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:05:33.981009509 +0000 UTC m=+1144.376528277" watchObservedRunningTime="2026-02-19 13:05:33.982645005 +0000 UTC m=+1144.378163763" Feb 19 13:05:34 crc kubenswrapper[4833]: I0219 13:05:34.012465 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-hnxz9" podStartSLOduration=2.87448589 podStartE2EDuration="46.012446922s" podCreationTimestamp="2026-02-19 13:04:48 +0000 UTC" firstStartedPulling="2026-02-19 13:04:50.278513499 +0000 UTC m=+1100.674032267" lastFinishedPulling="2026-02-19 13:05:33.416474531 +0000 UTC m=+1143.811993299" observedRunningTime="2026-02-19 13:05:33.993591199 +0000 UTC m=+1144.389109967" watchObservedRunningTime="2026-02-19 13:05:34.012446922 +0000 UTC m=+1144.407965690" Feb 19 13:05:34 crc kubenswrapper[4833]: I0219 13:05:34.017394 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/placement-64f9d5d984-h9kbm" podStartSLOduration=5.017379039 podStartE2EDuration="5.017379039s" podCreationTimestamp="2026-02-19 13:05:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:05:34.01597805 +0000 UTC m=+1144.411496828" watchObservedRunningTime="2026-02-19 13:05:34.017379039 +0000 UTC m=+1144.412897807" Feb 19 13:05:34 crc kubenswrapper[4833]: I0219 13:05:34.045105 4833 scope.go:117] "RemoveContainer" containerID="666cc9745dbaf145bec303408c021028637b9a80c48a345aed03f200846fe07a" Feb 19 13:05:34 crc kubenswrapper[4833]: I0219 13:05:34.325809 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca786541-c266-41b5-a91d-3d626d530b45" path="/var/lib/kubelet/pods/ca786541-c266-41b5-a91d-3d626d530b45/volumes" Feb 19 13:05:34 crc kubenswrapper[4833]: I0219 13:05:34.950897 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dsz68" event={"ID":"2902e7f1-6f1b-4b67-a9fa-fd031a961900","Type":"ContainerStarted","Data":"7f14887fc30017601effe9d5f3e501f5612845e000c13764e672a7f1166a72b0"} Feb 19 13:05:34 crc kubenswrapper[4833]: I0219 13:05:34.976530 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-dsz68" podStartSLOduration=3.843003397 podStartE2EDuration="46.976513815s" podCreationTimestamp="2026-02-19 13:04:48 +0000 UTC" firstStartedPulling="2026-02-19 13:04:50.287084468 +0000 UTC m=+1100.682603236" lastFinishedPulling="2026-02-19 13:05:33.420594896 +0000 UTC m=+1143.816113654" observedRunningTime="2026-02-19 13:05:34.97236196 +0000 UTC m=+1145.367880728" watchObservedRunningTime="2026-02-19 13:05:34.976513815 +0000 UTC m=+1145.372032583" Feb 19 13:05:36 crc kubenswrapper[4833]: I0219 13:05:36.969267 4833 generic.go:334] "Generic (PLEG): container finished" podID="d5aed427-a4af-40b6-bd9c-10284e0935ce" containerID="38b1634674e1c50f0e04e2286162bbeb49033885f7e7e96701a5067e935c944b" exitCode=0 Feb 19 13:05:36 crc kubenswrapper[4833]: I0219 13:05:36.969322 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hnxz9" event={"ID":"d5aed427-a4af-40b6-bd9c-10284e0935ce","Type":"ContainerDied","Data":"38b1634674e1c50f0e04e2286162bbeb49033885f7e7e96701a5067e935c944b"} Feb 19 13:05:38 crc kubenswrapper[4833]: I0219 13:05:38.331753 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5bc667ffbb-qqgnx" podUID="bbfe4179-53a2-4a74-9045-7a498c9aad70" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Feb 19 13:05:38 crc kubenswrapper[4833]: I0219 13:05:38.424679 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7b954444d4-2mwt9" podUID="88341f77-7fab-4dba-be1d-8e11becd2953" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Feb 19 13:05:38 crc kubenswrapper[4833]: I0219 13:05:38.993372 4833 generic.go:334] "Generic (PLEG): container finished" podID="2902e7f1-6f1b-4b67-a9fa-fd031a961900" containerID="7f14887fc30017601effe9d5f3e501f5612845e000c13764e672a7f1166a72b0" exitCode=0 Feb 19 13:05:38 crc kubenswrapper[4833]: I0219 13:05:38.993432 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dsz68" 
event={"ID":"2902e7f1-6f1b-4b67-a9fa-fd031a961900","Type":"ContainerDied","Data":"7f14887fc30017601effe9d5f3e501f5612845e000c13764e672a7f1166a72b0"} Feb 19 13:05:40 crc kubenswrapper[4833]: I0219 13:05:40.920370 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-hnxz9" Feb 19 13:05:41 crc kubenswrapper[4833]: I0219 13:05:41.015754 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-hnxz9" event={"ID":"d5aed427-a4af-40b6-bd9c-10284e0935ce","Type":"ContainerDied","Data":"bcf7a4a5d4ff805093ad794ec8cd1f12a74d138f0fdb5168ecb9511e1da34332"} Feb 19 13:05:41 crc kubenswrapper[4833]: I0219 13:05:41.015791 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcf7a4a5d4ff805093ad794ec8cd1f12a74d138f0fdb5168ecb9511e1da34332" Feb 19 13:05:41 crc kubenswrapper[4833]: I0219 13:05:41.015869 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-hnxz9" Feb 19 13:05:41 crc kubenswrapper[4833]: I0219 13:05:41.039295 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhtf5\" (UniqueName: \"kubernetes.io/projected/d5aed427-a4af-40b6-bd9c-10284e0935ce-kube-api-access-fhtf5\") pod \"d5aed427-a4af-40b6-bd9c-10284e0935ce\" (UID: \"d5aed427-a4af-40b6-bd9c-10284e0935ce\") " Feb 19 13:05:41 crc kubenswrapper[4833]: I0219 13:05:41.039591 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5aed427-a4af-40b6-bd9c-10284e0935ce-combined-ca-bundle\") pod \"d5aed427-a4af-40b6-bd9c-10284e0935ce\" (UID: \"d5aed427-a4af-40b6-bd9c-10284e0935ce\") " Feb 19 13:05:41 crc kubenswrapper[4833]: I0219 13:05:41.039645 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5aed427-a4af-40b6-bd9c-10284e0935ce-db-sync-config-data\") pod \"d5aed427-a4af-40b6-bd9c-10284e0935ce\" (UID: \"d5aed427-a4af-40b6-bd9c-10284e0935ce\") " Feb 19 13:05:41 crc kubenswrapper[4833]: I0219 13:05:41.046457 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5aed427-a4af-40b6-bd9c-10284e0935ce-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d5aed427-a4af-40b6-bd9c-10284e0935ce" (UID: "d5aed427-a4af-40b6-bd9c-10284e0935ce"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:41 crc kubenswrapper[4833]: I0219 13:05:41.048098 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5aed427-a4af-40b6-bd9c-10284e0935ce-kube-api-access-fhtf5" (OuterVolumeSpecName: "kube-api-access-fhtf5") pod "d5aed427-a4af-40b6-bd9c-10284e0935ce" (UID: "d5aed427-a4af-40b6-bd9c-10284e0935ce"). InnerVolumeSpecName "kube-api-access-fhtf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:05:41 crc kubenswrapper[4833]: I0219 13:05:41.063754 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5aed427-a4af-40b6-bd9c-10284e0935ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5aed427-a4af-40b6-bd9c-10284e0935ce" (UID: "d5aed427-a4af-40b6-bd9c-10284e0935ce"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:41 crc kubenswrapper[4833]: I0219 13:05:41.141884 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhtf5\" (UniqueName: \"kubernetes.io/projected/d5aed427-a4af-40b6-bd9c-10284e0935ce-kube-api-access-fhtf5\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:41 crc kubenswrapper[4833]: I0219 13:05:41.141919 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5aed427-a4af-40b6-bd9c-10284e0935ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:41 crc kubenswrapper[4833]: I0219 13:05:41.141928 4833 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5aed427-a4af-40b6-bd9c-10284e0935ce-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:41 crc kubenswrapper[4833]: I0219 13:05:41.418938 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-dsz68" Feb 19 13:05:41 crc kubenswrapper[4833]: I0219 13:05:41.547404 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2902e7f1-6f1b-4b67-a9fa-fd031a961900-config-data\") pod \"2902e7f1-6f1b-4b67-a9fa-fd031a961900\" (UID: \"2902e7f1-6f1b-4b67-a9fa-fd031a961900\") " Feb 19 13:05:41 crc kubenswrapper[4833]: I0219 13:05:41.547773 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2902e7f1-6f1b-4b67-a9fa-fd031a961900-scripts\") pod \"2902e7f1-6f1b-4b67-a9fa-fd031a961900\" (UID: \"2902e7f1-6f1b-4b67-a9fa-fd031a961900\") " Feb 19 13:05:41 crc kubenswrapper[4833]: I0219 13:05:41.547906 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2902e7f1-6f1b-4b67-a9fa-fd031a961900-combined-ca-bundle\") pod \"2902e7f1-6f1b-4b67-a9fa-fd031a961900\" (UID: \"2902e7f1-6f1b-4b67-a9fa-fd031a961900\") " Feb 19 13:05:41 crc kubenswrapper[4833]: I0219 13:05:41.547949 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2902e7f1-6f1b-4b67-a9fa-fd031a961900-etc-machine-id\") pod \"2902e7f1-6f1b-4b67-a9fa-fd031a961900\" (UID: \"2902e7f1-6f1b-4b67-a9fa-fd031a961900\") " Feb 19 13:05:41 crc kubenswrapper[4833]: I0219 13:05:41.548012 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bjm8\" (UniqueName: \"kubernetes.io/projected/2902e7f1-6f1b-4b67-a9fa-fd031a961900-kube-api-access-9bjm8\") pod \"2902e7f1-6f1b-4b67-a9fa-fd031a961900\" (UID: \"2902e7f1-6f1b-4b67-a9fa-fd031a961900\") " Feb 19 13:05:41 crc kubenswrapper[4833]: I0219 13:05:41.548089 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2902e7f1-6f1b-4b67-a9fa-fd031a961900-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2902e7f1-6f1b-4b67-a9fa-fd031a961900" (UID: "2902e7f1-6f1b-4b67-a9fa-fd031a961900"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:05:41 crc kubenswrapper[4833]: I0219 13:05:41.548134 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2902e7f1-6f1b-4b67-a9fa-fd031a961900-db-sync-config-data\") pod \"2902e7f1-6f1b-4b67-a9fa-fd031a961900\" (UID: \"2902e7f1-6f1b-4b67-a9fa-fd031a961900\") " Feb 19 13:05:41 crc kubenswrapper[4833]: I0219 13:05:41.548647 4833 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2902e7f1-6f1b-4b67-a9fa-fd031a961900-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:41 crc kubenswrapper[4833]: I0219 13:05:41.554653 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2902e7f1-6f1b-4b67-a9fa-fd031a961900-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2902e7f1-6f1b-4b67-a9fa-fd031a961900" (UID: "2902e7f1-6f1b-4b67-a9fa-fd031a961900"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:41 crc kubenswrapper[4833]: I0219 13:05:41.554971 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2902e7f1-6f1b-4b67-a9fa-fd031a961900-scripts" (OuterVolumeSpecName: "scripts") pod "2902e7f1-6f1b-4b67-a9fa-fd031a961900" (UID: "2902e7f1-6f1b-4b67-a9fa-fd031a961900"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:41 crc kubenswrapper[4833]: I0219 13:05:41.555154 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2902e7f1-6f1b-4b67-a9fa-fd031a961900-kube-api-access-9bjm8" (OuterVolumeSpecName: "kube-api-access-9bjm8") pod "2902e7f1-6f1b-4b67-a9fa-fd031a961900" (UID: "2902e7f1-6f1b-4b67-a9fa-fd031a961900"). InnerVolumeSpecName "kube-api-access-9bjm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:05:41 crc kubenswrapper[4833]: I0219 13:05:41.572371 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2902e7f1-6f1b-4b67-a9fa-fd031a961900-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2902e7f1-6f1b-4b67-a9fa-fd031a961900" (UID: "2902e7f1-6f1b-4b67-a9fa-fd031a961900"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:41 crc kubenswrapper[4833]: I0219 13:05:41.597369 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2902e7f1-6f1b-4b67-a9fa-fd031a961900-config-data" (OuterVolumeSpecName: "config-data") pod "2902e7f1-6f1b-4b67-a9fa-fd031a961900" (UID: "2902e7f1-6f1b-4b67-a9fa-fd031a961900"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:41 crc kubenswrapper[4833]: I0219 13:05:41.650046 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2902e7f1-6f1b-4b67-a9fa-fd031a961900-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:41 crc kubenswrapper[4833]: I0219 13:05:41.650078 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bjm8\" (UniqueName: \"kubernetes.io/projected/2902e7f1-6f1b-4b67-a9fa-fd031a961900-kube-api-access-9bjm8\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:41 crc kubenswrapper[4833]: I0219 13:05:41.650093 4833 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2902e7f1-6f1b-4b67-a9fa-fd031a961900-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:41 crc kubenswrapper[4833]: I0219 13:05:41.650102 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2902e7f1-6f1b-4b67-a9fa-fd031a961900-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:41 crc kubenswrapper[4833]: I0219 13:05:41.650111 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2902e7f1-6f1b-4b67-a9fa-fd031a961900-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:41 crc kubenswrapper[4833]: E0219 13:05:41.660398 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="2ef3a268-01cc-4ba4-b7cc-628bb6328271" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.027232 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dsz68" event={"ID":"2902e7f1-6f1b-4b67-a9fa-fd031a961900","Type":"ContainerDied","Data":"1ec4cbf7581a959707028b2706c3ca38c637a925059150fd741b33a78020f7b9"} Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.027277 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-dsz68" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.027299 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ec4cbf7581a959707028b2706c3ca38c637a925059150fd741b33a78020f7b9" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.030333 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ef3a268-01cc-4ba4-b7cc-628bb6328271","Type":"ContainerStarted","Data":"4e3c744ff86343470fce04f27de91d43abf5f1ee2bbeeba6e0a9a1ec38048e1c"} Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.030597 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2ef3a268-01cc-4ba4-b7cc-628bb6328271" containerName="ceilometer-notification-agent" containerID="cri-o://5b6d9758d28dcd5c467432f262b83933bd6760d53adeea57286377fdcc71b9ca" gracePeriod=30 Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.031025 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.031908 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2ef3a268-01cc-4ba4-b7cc-628bb6328271" containerName="proxy-httpd" containerID="cri-o://4e3c744ff86343470fce04f27de91d43abf5f1ee2bbeeba6e0a9a1ec38048e1c" gracePeriod=30 Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.032014 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2ef3a268-01cc-4ba4-b7cc-628bb6328271" containerName="sg-core" containerID="cri-o://cd687e064cb7f68bc62094b50ffacef805b2ecc52b09791ac784529029214652" gracePeriod=30 Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.235297 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-546699bf4c-sqbpl"] Feb 19 13:05:42 crc kubenswrapper[4833]: E0219 13:05:42.236027 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2902e7f1-6f1b-4b67-a9fa-fd031a961900" containerName="cinder-db-sync" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.236059 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2902e7f1-6f1b-4b67-a9fa-fd031a961900" containerName="cinder-db-sync" Feb 19 13:05:42 crc kubenswrapper[4833]: E0219 13:05:42.236090 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca786541-c266-41b5-a91d-3d626d530b45" containerName="init" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.236097 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca786541-c266-41b5-a91d-3d626d530b45" containerName="init" Feb 19 13:05:42 crc kubenswrapper[4833]: E0219 13:05:42.236117 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca786541-c266-41b5-a91d-3d626d530b45" containerName="dnsmasq-dns" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.236124 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca786541-c266-41b5-a91d-3d626d530b45" containerName="dnsmasq-dns" Feb 19 13:05:42 crc kubenswrapper[4833]: E0219 13:05:42.236135 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5aed427-a4af-40b6-bd9c-10284e0935ce" containerName="barbican-db-sync" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.236141 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5aed427-a4af-40b6-bd9c-10284e0935ce" containerName="barbican-db-sync" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 
13:05:42.236305 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca786541-c266-41b5-a91d-3d626d530b45" containerName="dnsmasq-dns" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.236319 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5aed427-a4af-40b6-bd9c-10284e0935ce" containerName="barbican-db-sync" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.236329 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="2902e7f1-6f1b-4b67-a9fa-fd031a961900" containerName="cinder-db-sync" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.237286 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-546699bf4c-sqbpl" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.247763 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.248939 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.311030 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-546699bf4c-sqbpl"] Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.332290 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-56b45cfbd8-5dfjr"] Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.348103 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-tl7wh" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.350686 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-56b45cfbd8-5dfjr" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.367131 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6595e595-9cbc-44bb-8629-a53da3b75bd6-combined-ca-bundle\") pod \"barbican-worker-546699bf4c-sqbpl\" (UID: \"6595e595-9cbc-44bb-8629-a53da3b75bd6\") " pod="openstack/barbican-worker-546699bf4c-sqbpl" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.371821 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6595e595-9cbc-44bb-8629-a53da3b75bd6-logs\") pod \"barbican-worker-546699bf4c-sqbpl\" (UID: \"6595e595-9cbc-44bb-8629-a53da3b75bd6\") " pod="openstack/barbican-worker-546699bf4c-sqbpl" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.372079 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md2q4\" (UniqueName: \"kubernetes.io/projected/6595e595-9cbc-44bb-8629-a53da3b75bd6-kube-api-access-md2q4\") pod \"barbican-worker-546699bf4c-sqbpl\" (UID: \"6595e595-9cbc-44bb-8629-a53da3b75bd6\") " pod="openstack/barbican-worker-546699bf4c-sqbpl" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.372314 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6595e595-9cbc-44bb-8629-a53da3b75bd6-config-data\") pod \"barbican-worker-546699bf4c-sqbpl\" (UID: \"6595e595-9cbc-44bb-8629-a53da3b75bd6\") " pod="openstack/barbican-worker-546699bf4c-sqbpl" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.372518 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6595e595-9cbc-44bb-8629-a53da3b75bd6-config-data-custom\") pod \"barbican-worker-546699bf4c-sqbpl\" (UID: \"6595e595-9cbc-44bb-8629-a53da3b75bd6\") " pod="openstack/barbican-worker-546699bf4c-sqbpl" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.372938 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.394121 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-56b45cfbd8-5dfjr"] Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.475727 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md2q4\" (UniqueName: \"kubernetes.io/projected/6595e595-9cbc-44bb-8629-a53da3b75bd6-kube-api-access-md2q4\") pod \"barbican-worker-546699bf4c-sqbpl\" (UID: \"6595e595-9cbc-44bb-8629-a53da3b75bd6\") " pod="openstack/barbican-worker-546699bf4c-sqbpl" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.475781 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f30e4d86-a08b-4021-8c83-3fb5abe86152-config-data\") pod \"barbican-keystone-listener-56b45cfbd8-5dfjr\" (UID: \"f30e4d86-a08b-4021-8c83-3fb5abe86152\") " pod="openstack/barbican-keystone-listener-56b45cfbd8-5dfjr" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.475807 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f30e4d86-a08b-4021-8c83-3fb5abe86152-logs\") pod \"barbican-keystone-listener-56b45cfbd8-5dfjr\" (UID: \"f30e4d86-a08b-4021-8c83-3fb5abe86152\") " pod="openstack/barbican-keystone-listener-56b45cfbd8-5dfjr" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.475874 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6595e595-9cbc-44bb-8629-a53da3b75bd6-config-data\") pod \"barbican-worker-546699bf4c-sqbpl\" (UID: \"6595e595-9cbc-44bb-8629-a53da3b75bd6\") " pod="openstack/barbican-worker-546699bf4c-sqbpl" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.475922 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chf6w\" (UniqueName: \"kubernetes.io/projected/f30e4d86-a08b-4021-8c83-3fb5abe86152-kube-api-access-chf6w\") pod \"barbican-keystone-listener-56b45cfbd8-5dfjr\" (UID: \"f30e4d86-a08b-4021-8c83-3fb5abe86152\") " pod="openstack/barbican-keystone-listener-56b45cfbd8-5dfjr" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.475957 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6595e595-9cbc-44bb-8629-a53da3b75bd6-config-data-custom\") pod \"barbican-worker-546699bf4c-sqbpl\" (UID: \"6595e595-9cbc-44bb-8629-a53da3b75bd6\") " pod="openstack/barbican-worker-546699bf4c-sqbpl" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.476006 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f30e4d86-a08b-4021-8c83-3fb5abe86152-config-data-custom\") pod \"barbican-keystone-listener-56b45cfbd8-5dfjr\" (UID: 
\"f30e4d86-a08b-4021-8c83-3fb5abe86152\") " pod="openstack/barbican-keystone-listener-56b45cfbd8-5dfjr" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.476025 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6595e595-9cbc-44bb-8629-a53da3b75bd6-combined-ca-bundle\") pod \"barbican-worker-546699bf4c-sqbpl\" (UID: \"6595e595-9cbc-44bb-8629-a53da3b75bd6\") " pod="openstack/barbican-worker-546699bf4c-sqbpl" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.476047 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f30e4d86-a08b-4021-8c83-3fb5abe86152-combined-ca-bundle\") pod \"barbican-keystone-listener-56b45cfbd8-5dfjr\" (UID: \"f30e4d86-a08b-4021-8c83-3fb5abe86152\") " pod="openstack/barbican-keystone-listener-56b45cfbd8-5dfjr" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.476067 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6595e595-9cbc-44bb-8629-a53da3b75bd6-logs\") pod \"barbican-worker-546699bf4c-sqbpl\" (UID: \"6595e595-9cbc-44bb-8629-a53da3b75bd6\") " pod="openstack/barbican-worker-546699bf4c-sqbpl" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.476535 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6595e595-9cbc-44bb-8629-a53da3b75bd6-logs\") pod \"barbican-worker-546699bf4c-sqbpl\" (UID: \"6595e595-9cbc-44bb-8629-a53da3b75bd6\") " pod="openstack/barbican-worker-546699bf4c-sqbpl" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.484659 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6595e595-9cbc-44bb-8629-a53da3b75bd6-combined-ca-bundle\") pod \"barbican-worker-546699bf4c-sqbpl\" (UID: \"6595e595-9cbc-44bb-8629-a53da3b75bd6\") " pod="openstack/barbican-worker-546699bf4c-sqbpl" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.493916 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6595e595-9cbc-44bb-8629-a53da3b75bd6-config-data\") pod \"barbican-worker-546699bf4c-sqbpl\" (UID: \"6595e595-9cbc-44bb-8629-a53da3b75bd6\") " pod="openstack/barbican-worker-546699bf4c-sqbpl" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.494006 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-t5f5g"] Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.495564 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-t5f5g" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.503722 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md2q4\" (UniqueName: \"kubernetes.io/projected/6595e595-9cbc-44bb-8629-a53da3b75bd6-kube-api-access-md2q4\") pod \"barbican-worker-546699bf4c-sqbpl\" (UID: \"6595e595-9cbc-44bb-8629-a53da3b75bd6\") " pod="openstack/barbican-worker-546699bf4c-sqbpl" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.517211 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6595e595-9cbc-44bb-8629-a53da3b75bd6-config-data-custom\") pod \"barbican-worker-546699bf4c-sqbpl\" (UID: \"6595e595-9cbc-44bb-8629-a53da3b75bd6\") " pod="openstack/barbican-worker-546699bf4c-sqbpl" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.535635 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-t5f5g"] Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.561162 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-546699bf4c-sqbpl" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.579401 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f30e4d86-a08b-4021-8c83-3fb5abe86152-config-data\") pod \"barbican-keystone-listener-56b45cfbd8-5dfjr\" (UID: \"f30e4d86-a08b-4021-8c83-3fb5abe86152\") " pod="openstack/barbican-keystone-listener-56b45cfbd8-5dfjr" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.579442 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f30e4d86-a08b-4021-8c83-3fb5abe86152-logs\") pod \"barbican-keystone-listener-56b45cfbd8-5dfjr\" (UID: \"f30e4d86-a08b-4021-8c83-3fb5abe86152\") " pod="openstack/barbican-keystone-listener-56b45cfbd8-5dfjr" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.579469 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hfqs\" (UniqueName: \"kubernetes.io/projected/2dbb3c56-c8bf-4c87-a68d-f158b52467da-kube-api-access-5hfqs\") pod \"dnsmasq-dns-848cf88cfc-t5f5g\" (UID: \"2dbb3c56-c8bf-4c87-a68d-f158b52467da\") " pod="openstack/dnsmasq-dns-848cf88cfc-t5f5g" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.579487 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dbb3c56-c8bf-4c87-a68d-f158b52467da-config\") pod \"dnsmasq-dns-848cf88cfc-t5f5g\" (UID: \"2dbb3c56-c8bf-4c87-a68d-f158b52467da\") " pod="openstack/dnsmasq-dns-848cf88cfc-t5f5g" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.579540 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2dbb3c56-c8bf-4c87-a68d-f158b52467da-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-t5f5g\" (UID: \"2dbb3c56-c8bf-4c87-a68d-f158b52467da\") " pod="openstack/dnsmasq-dns-848cf88cfc-t5f5g" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.579569 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2dbb3c56-c8bf-4c87-a68d-f158b52467da-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-t5f5g\" (UID: 
\"2dbb3c56-c8bf-4c87-a68d-f158b52467da\") " pod="openstack/dnsmasq-dns-848cf88cfc-t5f5g" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.579601 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chf6w\" (UniqueName: \"kubernetes.io/projected/f30e4d86-a08b-4021-8c83-3fb5abe86152-kube-api-access-chf6w\") pod \"barbican-keystone-listener-56b45cfbd8-5dfjr\" (UID: \"f30e4d86-a08b-4021-8c83-3fb5abe86152\") " pod="openstack/barbican-keystone-listener-56b45cfbd8-5dfjr" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.579637 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2dbb3c56-c8bf-4c87-a68d-f158b52467da-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-t5f5g\" (UID: \"2dbb3c56-c8bf-4c87-a68d-f158b52467da\") " pod="openstack/dnsmasq-dns-848cf88cfc-t5f5g" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.579656 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2dbb3c56-c8bf-4c87-a68d-f158b52467da-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-t5f5g\" (UID: \"2dbb3c56-c8bf-4c87-a68d-f158b52467da\") " pod="openstack/dnsmasq-dns-848cf88cfc-t5f5g" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.579688 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f30e4d86-a08b-4021-8c83-3fb5abe86152-config-data-custom\") pod \"barbican-keystone-listener-56b45cfbd8-5dfjr\" (UID: \"f30e4d86-a08b-4021-8c83-3fb5abe86152\") " pod="openstack/barbican-keystone-listener-56b45cfbd8-5dfjr" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.579718 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f30e4d86-a08b-4021-8c83-3fb5abe86152-combined-ca-bundle\") pod \"barbican-keystone-listener-56b45cfbd8-5dfjr\" (UID: \"f30e4d86-a08b-4021-8c83-3fb5abe86152\") " pod="openstack/barbican-keystone-listener-56b45cfbd8-5dfjr" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.585804 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f30e4d86-a08b-4021-8c83-3fb5abe86152-combined-ca-bundle\") pod \"barbican-keystone-listener-56b45cfbd8-5dfjr\" (UID: \"f30e4d86-a08b-4021-8c83-3fb5abe86152\") " pod="openstack/barbican-keystone-listener-56b45cfbd8-5dfjr" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.586560 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f30e4d86-a08b-4021-8c83-3fb5abe86152-logs\") pod \"barbican-keystone-listener-56b45cfbd8-5dfjr\" (UID: \"f30e4d86-a08b-4021-8c83-3fb5abe86152\") " pod="openstack/barbican-keystone-listener-56b45cfbd8-5dfjr" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.594487 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f30e4d86-a08b-4021-8c83-3fb5abe86152-config-data-custom\") pod \"barbican-keystone-listener-56b45cfbd8-5dfjr\" (UID: \"f30e4d86-a08b-4021-8c83-3fb5abe86152\") " pod="openstack/barbican-keystone-listener-56b45cfbd8-5dfjr" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.597769 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/f30e4d86-a08b-4021-8c83-3fb5abe86152-config-data\") pod \"barbican-keystone-listener-56b45cfbd8-5dfjr\" (UID: \"f30e4d86-a08b-4021-8c83-3fb5abe86152\") " pod="openstack/barbican-keystone-listener-56b45cfbd8-5dfjr" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.628220 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chf6w\" (UniqueName: \"kubernetes.io/projected/f30e4d86-a08b-4021-8c83-3fb5abe86152-kube-api-access-chf6w\") pod \"barbican-keystone-listener-56b45cfbd8-5dfjr\" (UID: \"f30e4d86-a08b-4021-8c83-3fb5abe86152\") " pod="openstack/barbican-keystone-listener-56b45cfbd8-5dfjr" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.654646 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-64db8648b4-fbc89"] Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.656187 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-64db8648b4-fbc89" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.659213 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.669339 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-64db8648b4-fbc89"] Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.681303 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hfqs\" (UniqueName: \"kubernetes.io/projected/2dbb3c56-c8bf-4c87-a68d-f158b52467da-kube-api-access-5hfqs\") pod \"dnsmasq-dns-848cf88cfc-t5f5g\" (UID: \"2dbb3c56-c8bf-4c87-a68d-f158b52467da\") " pod="openstack/dnsmasq-dns-848cf88cfc-t5f5g" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.682170 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dbb3c56-c8bf-4c87-a68d-f158b52467da-config\") pod \"dnsmasq-dns-848cf88cfc-t5f5g\" (UID: \"2dbb3c56-c8bf-4c87-a68d-f158b52467da\") " pod="openstack/dnsmasq-dns-848cf88cfc-t5f5g" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.681355 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dbb3c56-c8bf-4c87-a68d-f158b52467da-config\") pod \"dnsmasq-dns-848cf88cfc-t5f5g\" (UID: \"2dbb3c56-c8bf-4c87-a68d-f158b52467da\") " pod="openstack/dnsmasq-dns-848cf88cfc-t5f5g" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.682512 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2dbb3c56-c8bf-4c87-a68d-f158b52467da-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-t5f5g\" (UID: \"2dbb3c56-c8bf-4c87-a68d-f158b52467da\") " pod="openstack/dnsmasq-dns-848cf88cfc-t5f5g" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.682554 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2dbb3c56-c8bf-4c87-a68d-f158b52467da-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-t5f5g\" (UID: \"2dbb3c56-c8bf-4c87-a68d-f158b52467da\") " pod="openstack/dnsmasq-dns-848cf88cfc-t5f5g" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.682604 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2dbb3c56-c8bf-4c87-a68d-f158b52467da-dns-swift-storage-0\") pod 
\"dnsmasq-dns-848cf88cfc-t5f5g\" (UID: \"2dbb3c56-c8bf-4c87-a68d-f158b52467da\") " pod="openstack/dnsmasq-dns-848cf88cfc-t5f5g" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.682632 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2dbb3c56-c8bf-4c87-a68d-f158b52467da-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-t5f5g\" (UID: \"2dbb3c56-c8bf-4c87-a68d-f158b52467da\") " pod="openstack/dnsmasq-dns-848cf88cfc-t5f5g" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.683245 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2dbb3c56-c8bf-4c87-a68d-f158b52467da-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-t5f5g\" (UID: \"2dbb3c56-c8bf-4c87-a68d-f158b52467da\") " pod="openstack/dnsmasq-dns-848cf88cfc-t5f5g" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.683804 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2dbb3c56-c8bf-4c87-a68d-f158b52467da-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-t5f5g\" (UID: \"2dbb3c56-c8bf-4c87-a68d-f158b52467da\") " pod="openstack/dnsmasq-dns-848cf88cfc-t5f5g" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.685470 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2dbb3c56-c8bf-4c87-a68d-f158b52467da-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-t5f5g\" (UID: \"2dbb3c56-c8bf-4c87-a68d-f158b52467da\") " pod="openstack/dnsmasq-dns-848cf88cfc-t5f5g" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.687305 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2dbb3c56-c8bf-4c87-a68d-f158b52467da-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-t5f5g\" (UID: \"2dbb3c56-c8bf-4c87-a68d-f158b52467da\") " pod="openstack/dnsmasq-dns-848cf88cfc-t5f5g" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.694196 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-56b45cfbd8-5dfjr" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.712734 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hfqs\" (UniqueName: \"kubernetes.io/projected/2dbb3c56-c8bf-4c87-a68d-f158b52467da-kube-api-access-5hfqs\") pod \"dnsmasq-dns-848cf88cfc-t5f5g\" (UID: \"2dbb3c56-c8bf-4c87-a68d-f158b52467da\") " pod="openstack/dnsmasq-dns-848cf88cfc-t5f5g" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.784577 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca2586c3-a9f8-45ed-a4e6-406a64304f7d-combined-ca-bundle\") pod \"barbican-api-64db8648b4-fbc89\" (UID: \"ca2586c3-a9f8-45ed-a4e6-406a64304f7d\") " pod="openstack/barbican-api-64db8648b4-fbc89" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.784924 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca2586c3-a9f8-45ed-a4e6-406a64304f7d-config-data-custom\") pod \"barbican-api-64db8648b4-fbc89\" (UID: \"ca2586c3-a9f8-45ed-a4e6-406a64304f7d\") " pod="openstack/barbican-api-64db8648b4-fbc89" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.784953 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca2586c3-a9f8-45ed-a4e6-406a64304f7d-config-data\") pod \"barbican-api-64db8648b4-fbc89\" (UID: \"ca2586c3-a9f8-45ed-a4e6-406a64304f7d\") " pod="openstack/barbican-api-64db8648b4-fbc89" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.785040 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dbm4\" (UniqueName: \"kubernetes.io/projected/ca2586c3-a9f8-45ed-a4e6-406a64304f7d-kube-api-access-8dbm4\") pod \"barbican-api-64db8648b4-fbc89\" (UID: \"ca2586c3-a9f8-45ed-a4e6-406a64304f7d\") " pod="openstack/barbican-api-64db8648b4-fbc89" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.785074 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca2586c3-a9f8-45ed-a4e6-406a64304f7d-logs\") pod \"barbican-api-64db8648b4-fbc89\" (UID: \"ca2586c3-a9f8-45ed-a4e6-406a64304f7d\") " pod="openstack/barbican-api-64db8648b4-fbc89" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.829993 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.831342 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.843343 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.843551 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.843637 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.843739 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-dcbpv" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.847196 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.896513 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dbm4\" (UniqueName: \"kubernetes.io/projected/ca2586c3-a9f8-45ed-a4e6-406a64304f7d-kube-api-access-8dbm4\") pod \"barbican-api-64db8648b4-fbc89\" (UID: \"ca2586c3-a9f8-45ed-a4e6-406a64304f7d\") " pod="openstack/barbican-api-64db8648b4-fbc89" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.896679 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca2586c3-a9f8-45ed-a4e6-406a64304f7d-logs\") pod \"barbican-api-64db8648b4-fbc89\" (UID: \"ca2586c3-a9f8-45ed-a4e6-406a64304f7d\") " pod="openstack/barbican-api-64db8648b4-fbc89" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.896767 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca2586c3-a9f8-45ed-a4e6-406a64304f7d-combined-ca-bundle\") pod \"barbican-api-64db8648b4-fbc89\" (UID: \"ca2586c3-a9f8-45ed-a4e6-406a64304f7d\") " pod="openstack/barbican-api-64db8648b4-fbc89" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.896880 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca2586c3-a9f8-45ed-a4e6-406a64304f7d-config-data-custom\") pod \"barbican-api-64db8648b4-fbc89\" (UID: \"ca2586c3-a9f8-45ed-a4e6-406a64304f7d\") " pod="openstack/barbican-api-64db8648b4-fbc89" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.896927 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca2586c3-a9f8-45ed-a4e6-406a64304f7d-config-data\") pod \"barbican-api-64db8648b4-fbc89\" (UID: \"ca2586c3-a9f8-45ed-a4e6-406a64304f7d\") " pod="openstack/barbican-api-64db8648b4-fbc89" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.902179 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca2586c3-a9f8-45ed-a4e6-406a64304f7d-logs\") pod \"barbican-api-64db8648b4-fbc89\" (UID: \"ca2586c3-a9f8-45ed-a4e6-406a64304f7d\") " pod="openstack/barbican-api-64db8648b4-fbc89" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.902589 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-t5f5g" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.911895 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-t5f5g"] Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.913022 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca2586c3-a9f8-45ed-a4e6-406a64304f7d-combined-ca-bundle\") pod \"barbican-api-64db8648b4-fbc89\" (UID: \"ca2586c3-a9f8-45ed-a4e6-406a64304f7d\") " pod="openstack/barbican-api-64db8648b4-fbc89" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.913411 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca2586c3-a9f8-45ed-a4e6-406a64304f7d-config-data\") pod \"barbican-api-64db8648b4-fbc89\" (UID: \"ca2586c3-a9f8-45ed-a4e6-406a64304f7d\") " pod="openstack/barbican-api-64db8648b4-fbc89" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.917115 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca2586c3-a9f8-45ed-a4e6-406a64304f7d-config-data-custom\") pod \"barbican-api-64db8648b4-fbc89\" (UID: \"ca2586c3-a9f8-45ed-a4e6-406a64304f7d\") " pod="openstack/barbican-api-64db8648b4-fbc89" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.954137 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dbm4\" (UniqueName: \"kubernetes.io/projected/ca2586c3-a9f8-45ed-a4e6-406a64304f7d-kube-api-access-8dbm4\") pod \"barbican-api-64db8648b4-fbc89\" (UID: \"ca2586c3-a9f8-45ed-a4e6-406a64304f7d\") " pod="openstack/barbican-api-64db8648b4-fbc89" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.964557 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-scljw"] Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.966049 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-scljw" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.967435 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-scljw"] Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.998149 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4616286-268c-4a6e-9542-699a0da55dbd-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c4616286-268c-4a6e-9542-699a0da55dbd\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.998189 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4616286-268c-4a6e-9542-699a0da55dbd-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c4616286-268c-4a6e-9542-699a0da55dbd\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.998234 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4616286-268c-4a6e-9542-699a0da55dbd-scripts\") pod \"cinder-scheduler-0\" (UID: \"c4616286-268c-4a6e-9542-699a0da55dbd\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.998293 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4616286-268c-4a6e-9542-699a0da55dbd-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c4616286-268c-4a6e-9542-699a0da55dbd\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.998344 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55cnf\" (UniqueName: \"kubernetes.io/projected/c4616286-268c-4a6e-9542-699a0da55dbd-kube-api-access-55cnf\") pod \"cinder-scheduler-0\" (UID: \"c4616286-268c-4a6e-9542-699a0da55dbd\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:42 crc kubenswrapper[4833]: I0219 13:05:42.998362 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4616286-268c-4a6e-9542-699a0da55dbd-config-data\") pod \"cinder-scheduler-0\" (UID: \"c4616286-268c-4a6e-9542-699a0da55dbd\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.053649 4833 generic.go:334] "Generic (PLEG): container finished" podID="2ef3a268-01cc-4ba4-b7cc-628bb6328271" containerID="4e3c744ff86343470fce04f27de91d43abf5f1ee2bbeeba6e0a9a1ec38048e1c" exitCode=0 Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.053675 4833 generic.go:334] "Generic (PLEG): container finished" podID="2ef3a268-01cc-4ba4-b7cc-628bb6328271" containerID="cd687e064cb7f68bc62094b50ffacef805b2ecc52b09791ac784529029214652" exitCode=2 Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.053684 4833 generic.go:334] "Generic (PLEG): container finished" podID="2ef3a268-01cc-4ba4-b7cc-628bb6328271" containerID="5b6d9758d28dcd5c467432f262b83933bd6760d53adeea57286377fdcc71b9ca" exitCode=0 Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.053702 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2ef3a268-01cc-4ba4-b7cc-628bb6328271","Type":"ContainerDied","Data":"4e3c744ff86343470fce04f27de91d43abf5f1ee2bbeeba6e0a9a1ec38048e1c"} Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.053726 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ef3a268-01cc-4ba4-b7cc-628bb6328271","Type":"ContainerDied","Data":"cd687e064cb7f68bc62094b50ffacef805b2ecc52b09791ac784529029214652"} Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.053737 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ef3a268-01cc-4ba4-b7cc-628bb6328271","Type":"ContainerDied","Data":"5b6d9758d28dcd5c467432f262b83933bd6760d53adeea57286377fdcc71b9ca"} Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.100084 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55cnf\" (UniqueName: \"kubernetes.io/projected/c4616286-268c-4a6e-9542-699a0da55dbd-kube-api-access-55cnf\") pod \"cinder-scheduler-0\" (UID: \"c4616286-268c-4a6e-9542-699a0da55dbd\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.100129 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4616286-268c-4a6e-9542-699a0da55dbd-config-data\") pod \"cinder-scheduler-0\" (UID: \"c4616286-268c-4a6e-9542-699a0da55dbd\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.100173 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dcaf371-d64d-457f-859b-7815899d3450-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-scljw\" (UID: \"1dcaf371-d64d-457f-859b-7815899d3450\") " pod="openstack/dnsmasq-dns-6578955fd5-scljw" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.100193 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1dcaf371-d64d-457f-859b-7815899d3450-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-scljw\" (UID: \"1dcaf371-d64d-457f-859b-7815899d3450\") " pod="openstack/dnsmasq-dns-6578955fd5-scljw" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.100226 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dcaf371-d64d-457f-859b-7815899d3450-config\") pod \"dnsmasq-dns-6578955fd5-scljw\" (UID: \"1dcaf371-d64d-457f-859b-7815899d3450\") " pod="openstack/dnsmasq-dns-6578955fd5-scljw" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.100280 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4616286-268c-4a6e-9542-699a0da55dbd-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c4616286-268c-4a6e-9542-699a0da55dbd\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.100300 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4616286-268c-4a6e-9542-699a0da55dbd-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c4616286-268c-4a6e-9542-699a0da55dbd\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.100316 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dcaf371-d64d-457f-859b-7815899d3450-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-scljw\" (UID: \"1dcaf371-d64d-457f-859b-7815899d3450\") " pod="openstack/dnsmasq-dns-6578955fd5-scljw" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.100336 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dcaf371-d64d-457f-859b-7815899d3450-dns-svc\") pod \"dnsmasq-dns-6578955fd5-scljw\" (UID: \"1dcaf371-d64d-457f-859b-7815899d3450\") " pod="openstack/dnsmasq-dns-6578955fd5-scljw" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.100362 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4616286-268c-4a6e-9542-699a0da55dbd-scripts\") pod \"cinder-scheduler-0\" (UID: \"c4616286-268c-4a6e-9542-699a0da55dbd\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.100384 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g89ds\" (UniqueName: \"kubernetes.io/projected/1dcaf371-d64d-457f-859b-7815899d3450-kube-api-access-g89ds\") pod \"dnsmasq-dns-6578955fd5-scljw\" (UID: \"1dcaf371-d64d-457f-859b-7815899d3450\") " pod="openstack/dnsmasq-dns-6578955fd5-scljw" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.100432 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4616286-268c-4a6e-9542-699a0da55dbd-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c4616286-268c-4a6e-9542-699a0da55dbd\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.100531 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4616286-268c-4a6e-9542-699a0da55dbd-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c4616286-268c-4a6e-9542-699a0da55dbd\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.104617 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4616286-268c-4a6e-9542-699a0da55dbd-config-data\") pod \"cinder-scheduler-0\" (UID: \"c4616286-268c-4a6e-9542-699a0da55dbd\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.107352 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4616286-268c-4a6e-9542-699a0da55dbd-scripts\") pod \"cinder-scheduler-0\" (UID: \"c4616286-268c-4a6e-9542-699a0da55dbd\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.110112 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4616286-268c-4a6e-9542-699a0da55dbd-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c4616286-268c-4a6e-9542-699a0da55dbd\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.110571 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4616286-268c-4a6e-9542-699a0da55dbd-config-data-custom\") pod \"cinder-scheduler-0\" 
(UID: \"c4616286-268c-4a6e-9542-699a0da55dbd\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.124365 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.126099 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.128389 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55cnf\" (UniqueName: \"kubernetes.io/projected/c4616286-268c-4a6e-9542-699a0da55dbd-kube-api-access-55cnf\") pod \"cinder-scheduler-0\" (UID: \"c4616286-268c-4a6e-9542-699a0da55dbd\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.128506 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.130287 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-64db8648b4-fbc89" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.171034 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.201911 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g89ds\" (UniqueName: \"kubernetes.io/projected/1dcaf371-d64d-457f-859b-7815899d3450-kube-api-access-g89ds\") pod \"dnsmasq-dns-6578955fd5-scljw\" (UID: \"1dcaf371-d64d-457f-859b-7815899d3450\") " pod="openstack/dnsmasq-dns-6578955fd5-scljw" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.201956 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g276q\" (UniqueName: \"kubernetes.io/projected/cd619681-d181-4b2b-ae1e-0d41dc9d672e-kube-api-access-g276q\") pod \"cinder-api-0\" (UID: \"cd619681-d181-4b2b-ae1e-0d41dc9d672e\") " pod="openstack/cinder-api-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.202005 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd619681-d181-4b2b-ae1e-0d41dc9d672e-logs\") pod \"cinder-api-0\" (UID: \"cd619681-d181-4b2b-ae1e-0d41dc9d672e\") " pod="openstack/cinder-api-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.202039 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd619681-d181-4b2b-ae1e-0d41dc9d672e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"cd619681-d181-4b2b-ae1e-0d41dc9d672e\") " pod="openstack/cinder-api-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.202063 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd619681-d181-4b2b-ae1e-0d41dc9d672e-scripts\") pod \"cinder-api-0\" (UID: \"cd619681-d181-4b2b-ae1e-0d41dc9d672e\") " pod="openstack/cinder-api-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.202117 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1dcaf371-d64d-457f-859b-7815899d3450-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-scljw\" (UID: \"1dcaf371-d64d-457f-859b-7815899d3450\") " 
pod="openstack/dnsmasq-dns-6578955fd5-scljw" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.202133 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dcaf371-d64d-457f-859b-7815899d3450-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-scljw\" (UID: \"1dcaf371-d64d-457f-859b-7815899d3450\") " pod="openstack/dnsmasq-dns-6578955fd5-scljw" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.202179 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dcaf371-d64d-457f-859b-7815899d3450-config\") pod \"dnsmasq-dns-6578955fd5-scljw\" (UID: \"1dcaf371-d64d-457f-859b-7815899d3450\") " pod="openstack/dnsmasq-dns-6578955fd5-scljw" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.202202 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd619681-d181-4b2b-ae1e-0d41dc9d672e-config-data-custom\") pod \"cinder-api-0\" (UID: \"cd619681-d181-4b2b-ae1e-0d41dc9d672e\") " pod="openstack/cinder-api-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.202240 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd619681-d181-4b2b-ae1e-0d41dc9d672e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"cd619681-d181-4b2b-ae1e-0d41dc9d672e\") " pod="openstack/cinder-api-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.202337 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd619681-d181-4b2b-ae1e-0d41dc9d672e-config-data\") pod \"cinder-api-0\" (UID: \"cd619681-d181-4b2b-ae1e-0d41dc9d672e\") " pod="openstack/cinder-api-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.202406 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dcaf371-d64d-457f-859b-7815899d3450-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-scljw\" (UID: \"1dcaf371-d64d-457f-859b-7815899d3450\") " pod="openstack/dnsmasq-dns-6578955fd5-scljw" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.202428 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dcaf371-d64d-457f-859b-7815899d3450-dns-svc\") pod \"dnsmasq-dns-6578955fd5-scljw\" (UID: \"1dcaf371-d64d-457f-859b-7815899d3450\") " pod="openstack/dnsmasq-dns-6578955fd5-scljw" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.205069 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-546699bf4c-sqbpl"] Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.206463 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dcaf371-d64d-457f-859b-7815899d3450-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-scljw\" (UID: \"1dcaf371-d64d-457f-859b-7815899d3450\") " pod="openstack/dnsmasq-dns-6578955fd5-scljw" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.207318 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dcaf371-d64d-457f-859b-7815899d3450-dns-svc\") pod \"dnsmasq-dns-6578955fd5-scljw\" (UID: 
\"1dcaf371-d64d-457f-859b-7815899d3450\") " pod="openstack/dnsmasq-dns-6578955fd5-scljw" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.207991 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1dcaf371-d64d-457f-859b-7815899d3450-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-scljw\" (UID: \"1dcaf371-d64d-457f-859b-7815899d3450\") " pod="openstack/dnsmasq-dns-6578955fd5-scljw" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.208140 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dcaf371-d64d-457f-859b-7815899d3450-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-scljw\" (UID: \"1dcaf371-d64d-457f-859b-7815899d3450\") " pod="openstack/dnsmasq-dns-6578955fd5-scljw" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.208211 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dcaf371-d64d-457f-859b-7815899d3450-config\") pod \"dnsmasq-dns-6578955fd5-scljw\" (UID: \"1dcaf371-d64d-457f-859b-7815899d3450\") " pod="openstack/dnsmasq-dns-6578955fd5-scljw" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.218159 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g89ds\" (UniqueName: \"kubernetes.io/projected/1dcaf371-d64d-457f-859b-7815899d3450-kube-api-access-g89ds\") pod \"dnsmasq-dns-6578955fd5-scljw\" (UID: \"1dcaf371-d64d-457f-859b-7815899d3450\") " pod="openstack/dnsmasq-dns-6578955fd5-scljw" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.290732 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.342115 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ef3a268-01cc-4ba4-b7cc-628bb6328271-sg-core-conf-yaml\") pod \"2ef3a268-01cc-4ba4-b7cc-628bb6328271\" (UID: \"2ef3a268-01cc-4ba4-b7cc-628bb6328271\") " Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.342243 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lngh\" (UniqueName: \"kubernetes.io/projected/2ef3a268-01cc-4ba4-b7cc-628bb6328271-kube-api-access-7lngh\") pod \"2ef3a268-01cc-4ba4-b7cc-628bb6328271\" (UID: \"2ef3a268-01cc-4ba4-b7cc-628bb6328271\") " Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.342397 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ef3a268-01cc-4ba4-b7cc-628bb6328271-run-httpd\") pod \"2ef3a268-01cc-4ba4-b7cc-628bb6328271\" (UID: \"2ef3a268-01cc-4ba4-b7cc-628bb6328271\") " Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.342482 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ef3a268-01cc-4ba4-b7cc-628bb6328271-scripts\") pod \"2ef3a268-01cc-4ba4-b7cc-628bb6328271\" (UID: \"2ef3a268-01cc-4ba4-b7cc-628bb6328271\") " Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.342646 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef3a268-01cc-4ba4-b7cc-628bb6328271-config-data\") pod \"2ef3a268-01cc-4ba4-b7cc-628bb6328271\" (UID: \"2ef3a268-01cc-4ba4-b7cc-628bb6328271\") " Feb 19 
13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.342668 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ef3a268-01cc-4ba4-b7cc-628bb6328271-log-httpd\") pod \"2ef3a268-01cc-4ba4-b7cc-628bb6328271\" (UID: \"2ef3a268-01cc-4ba4-b7cc-628bb6328271\") " Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.342994 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ef3a268-01cc-4ba4-b7cc-628bb6328271-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2ef3a268-01cc-4ba4-b7cc-628bb6328271" (UID: "2ef3a268-01cc-4ba4-b7cc-628bb6328271"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.343036 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd619681-d181-4b2b-ae1e-0d41dc9d672e-logs\") pod \"cinder-api-0\" (UID: \"cd619681-d181-4b2b-ae1e-0d41dc9d672e\") " pod="openstack/cinder-api-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.343119 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd619681-d181-4b2b-ae1e-0d41dc9d672e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"cd619681-d181-4b2b-ae1e-0d41dc9d672e\") " pod="openstack/cinder-api-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.343174 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd619681-d181-4b2b-ae1e-0d41dc9d672e-scripts\") pod \"cinder-api-0\" (UID: \"cd619681-d181-4b2b-ae1e-0d41dc9d672e\") " pod="openstack/cinder-api-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.343273 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd619681-d181-4b2b-ae1e-0d41dc9d672e-config-data-custom\") pod \"cinder-api-0\" (UID: \"cd619681-d181-4b2b-ae1e-0d41dc9d672e\") " pod="openstack/cinder-api-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.343302 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd619681-d181-4b2b-ae1e-0d41dc9d672e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"cd619681-d181-4b2b-ae1e-0d41dc9d672e\") " pod="openstack/cinder-api-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.343362 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd619681-d181-4b2b-ae1e-0d41dc9d672e-config-data\") pod \"cinder-api-0\" (UID: \"cd619681-d181-4b2b-ae1e-0d41dc9d672e\") " pod="openstack/cinder-api-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.343522 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g276q\" (UniqueName: \"kubernetes.io/projected/cd619681-d181-4b2b-ae1e-0d41dc9d672e-kube-api-access-g276q\") pod \"cinder-api-0\" (UID: \"cd619681-d181-4b2b-ae1e-0d41dc9d672e\") " pod="openstack/cinder-api-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.343597 4833 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ef3a268-01cc-4ba4-b7cc-628bb6328271-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 
13:05:43.343681 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ef3a268-01cc-4ba4-b7cc-628bb6328271-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2ef3a268-01cc-4ba4-b7cc-628bb6328271" (UID: "2ef3a268-01cc-4ba4-b7cc-628bb6328271"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.343760 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd619681-d181-4b2b-ae1e-0d41dc9d672e-etc-machine-id\") pod \"cinder-api-0\" (UID: \"cd619681-d181-4b2b-ae1e-0d41dc9d672e\") " pod="openstack/cinder-api-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.344084 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd619681-d181-4b2b-ae1e-0d41dc9d672e-logs\") pod \"cinder-api-0\" (UID: \"cd619681-d181-4b2b-ae1e-0d41dc9d672e\") " pod="openstack/cinder-api-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.346820 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ef3a268-01cc-4ba4-b7cc-628bb6328271-scripts" (OuterVolumeSpecName: "scripts") pod "2ef3a268-01cc-4ba4-b7cc-628bb6328271" (UID: "2ef3a268-01cc-4ba4-b7cc-628bb6328271"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.347901 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd619681-d181-4b2b-ae1e-0d41dc9d672e-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"cd619681-d181-4b2b-ae1e-0d41dc9d672e\") " pod="openstack/cinder-api-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.350615 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd619681-d181-4b2b-ae1e-0d41dc9d672e-scripts\") pod \"cinder-api-0\" (UID: \"cd619681-d181-4b2b-ae1e-0d41dc9d672e\") " pod="openstack/cinder-api-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.352408 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ef3a268-01cc-4ba4-b7cc-628bb6328271-kube-api-access-7lngh" (OuterVolumeSpecName: "kube-api-access-7lngh") pod "2ef3a268-01cc-4ba4-b7cc-628bb6328271" (UID: "2ef3a268-01cc-4ba4-b7cc-628bb6328271"). InnerVolumeSpecName "kube-api-access-7lngh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.356071 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd619681-d181-4b2b-ae1e-0d41dc9d672e-config-data-custom\") pod \"cinder-api-0\" (UID: \"cd619681-d181-4b2b-ae1e-0d41dc9d672e\") " pod="openstack/cinder-api-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.359077 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd619681-d181-4b2b-ae1e-0d41dc9d672e-config-data\") pod \"cinder-api-0\" (UID: \"cd619681-d181-4b2b-ae1e-0d41dc9d672e\") " pod="openstack/cinder-api-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.366069 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.377006 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-scljw" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.384917 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ef3a268-01cc-4ba4-b7cc-628bb6328271-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2ef3a268-01cc-4ba4-b7cc-628bb6328271" (UID: "2ef3a268-01cc-4ba4-b7cc-628bb6328271"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.388183 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g276q\" (UniqueName: \"kubernetes.io/projected/cd619681-d181-4b2b-ae1e-0d41dc9d672e-kube-api-access-g276q\") pod \"cinder-api-0\" (UID: \"cd619681-d181-4b2b-ae1e-0d41dc9d672e\") " pod="openstack/cinder-api-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.434102 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ef3a268-01cc-4ba4-b7cc-628bb6328271-config-data" (OuterVolumeSpecName: "config-data") pod "2ef3a268-01cc-4ba4-b7cc-628bb6328271" (UID: "2ef3a268-01cc-4ba4-b7cc-628bb6328271"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.445738 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef3a268-01cc-4ba4-b7cc-628bb6328271-combined-ca-bundle\") pod \"2ef3a268-01cc-4ba4-b7cc-628bb6328271\" (UID: \"2ef3a268-01cc-4ba4-b7cc-628bb6328271\") " Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.446202 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ef3a268-01cc-4ba4-b7cc-628bb6328271-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.446216 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef3a268-01cc-4ba4-b7cc-628bb6328271-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.446225 4833 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ef3a268-01cc-4ba4-b7cc-628bb6328271-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.446233 4833 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ef3a268-01cc-4ba4-b7cc-628bb6328271-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.446244 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lngh\" (UniqueName: \"kubernetes.io/projected/2ef3a268-01cc-4ba4-b7cc-628bb6328271-kube-api-access-7lngh\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.450251 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.503566 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ef3a268-01cc-4ba4-b7cc-628bb6328271-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ef3a268-01cc-4ba4-b7cc-628bb6328271" (UID: "2ef3a268-01cc-4ba4-b7cc-628bb6328271"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.514730 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-t5f5g"] Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.525591 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-56b45cfbd8-5dfjr"] Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.548466 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef3a268-01cc-4ba4-b7cc-628bb6328271-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.719611 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-64db8648b4-fbc89"] Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.875833 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 13:05:43 crc kubenswrapper[4833]: I0219 13:05:43.930566 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-scljw"] Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.034150 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.070936 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ef3a268-01cc-4ba4-b7cc-628bb6328271","Type":"ContainerDied","Data":"a3751e65c5cf86eba6f25c03cd28dc301d24a46311c5ab242db64a37ac46ac1e"} Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.071070 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.072407 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4616286-268c-4a6e-9542-699a0da55dbd","Type":"ContainerStarted","Data":"46cc0652b0bfa3a269efb79fe7f20e46db12b27a7504b7dea07a5bc246d44454"} Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.072434 4833 scope.go:117] "RemoveContainer" containerID="4e3c744ff86343470fce04f27de91d43abf5f1ee2bbeeba6e0a9a1ec38048e1c" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.074798 4833 generic.go:334] "Generic (PLEG): container finished" podID="2dbb3c56-c8bf-4c87-a68d-f158b52467da" containerID="a1843d1463f7c0d69e5675e8ab4f6de296a7983b29f1911e2ab7b6919817f763" exitCode=0 Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.074842 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-t5f5g" event={"ID":"2dbb3c56-c8bf-4c87-a68d-f158b52467da","Type":"ContainerDied","Data":"a1843d1463f7c0d69e5675e8ab4f6de296a7983b29f1911e2ab7b6919817f763"} Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.075088 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-t5f5g" event={"ID":"2dbb3c56-c8bf-4c87-a68d-f158b52467da","Type":"ContainerStarted","Data":"ac08145baf8fa0adebaa8d73ec8c16720c5fb77f4db7fcacc3671cccc6883864"} Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.080672 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-56b45cfbd8-5dfjr" event={"ID":"f30e4d86-a08b-4021-8c83-3fb5abe86152","Type":"ContainerStarted","Data":"b60bd31694f5acaab50e493590df52e81117f1c0994f8a8dc22da412d4c23d70"} Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.081941 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-546699bf4c-sqbpl" event={"ID":"6595e595-9cbc-44bb-8629-a53da3b75bd6","Type":"ContainerStarted","Data":"d9d4f577bf44754c05dadd30add2f55d310295228207aaf11b8ed6a442c5f252"} Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.083521 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cd619681-d181-4b2b-ae1e-0d41dc9d672e","Type":"ContainerStarted","Data":"a4ca29c3a004c538d8c5d400b418eb47a2e555b3b1f18740e0ee7b3b35f830ea"} Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.084191 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-scljw" event={"ID":"1dcaf371-d64d-457f-859b-7815899d3450","Type":"ContainerStarted","Data":"7cac59d2f9f4b70c3e7595fd5250a17042a91b2acc41ca06b97dfa3f76368b0e"} Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.084907 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64db8648b4-fbc89" event={"ID":"ca2586c3-a9f8-45ed-a4e6-406a64304f7d","Type":"ContainerStarted","Data":"dea60f8d6a0dca4e1b62704d4d6feea53d6d0d8b7a8bce92aee1aad543d6e53e"} Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.108984 4833 scope.go:117] "RemoveContainer" containerID="cd687e064cb7f68bc62094b50ffacef805b2ecc52b09791ac784529029214652" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.131394 4833 scope.go:117] "RemoveContainer" containerID="5b6d9758d28dcd5c467432f262b83933bd6760d53adeea57286377fdcc71b9ca" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.161687 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 
13:05:44.174713 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.190415 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:05:44 crc kubenswrapper[4833]: E0219 13:05:44.190792 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ef3a268-01cc-4ba4-b7cc-628bb6328271" containerName="ceilometer-notification-agent" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.190804 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ef3a268-01cc-4ba4-b7cc-628bb6328271" containerName="ceilometer-notification-agent" Feb 19 13:05:44 crc kubenswrapper[4833]: E0219 13:05:44.190828 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ef3a268-01cc-4ba4-b7cc-628bb6328271" containerName="sg-core" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.190834 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ef3a268-01cc-4ba4-b7cc-628bb6328271" containerName="sg-core" Feb 19 13:05:44 crc kubenswrapper[4833]: E0219 13:05:44.190859 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ef3a268-01cc-4ba4-b7cc-628bb6328271" containerName="proxy-httpd" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.190865 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ef3a268-01cc-4ba4-b7cc-628bb6328271" containerName="proxy-httpd" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.191036 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ef3a268-01cc-4ba4-b7cc-628bb6328271" containerName="sg-core" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.191057 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ef3a268-01cc-4ba4-b7cc-628bb6328271" containerName="proxy-httpd" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.191067 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ef3a268-01cc-4ba4-b7cc-628bb6328271" containerName="ceilometer-notification-agent" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.194980 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.198755 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.198847 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.200586 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.333913 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ef3a268-01cc-4ba4-b7cc-628bb6328271" path="/var/lib/kubelet/pods/2ef3a268-01cc-4ba4-b7cc-628bb6328271/volumes" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.378357 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-scripts\") pod \"ceilometer-0\" (UID: \"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8\") " pod="openstack/ceilometer-0" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.378502 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-log-httpd\") pod \"ceilometer-0\" (UID: \"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8\") " pod="openstack/ceilometer-0" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.378747 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-config-data\") pod \"ceilometer-0\" (UID: \"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8\") " pod="openstack/ceilometer-0" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.378849 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-run-httpd\") pod \"ceilometer-0\" (UID: \"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8\") " pod="openstack/ceilometer-0" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.378875 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8\") " pod="openstack/ceilometer-0" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.379006 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sr9z\" (UniqueName: \"kubernetes.io/projected/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-kube-api-access-7sr9z\") pod \"ceilometer-0\" (UID: \"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8\") " pod="openstack/ceilometer-0" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.379036 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8\") " pod="openstack/ceilometer-0" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.391011 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-t5f5g" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.480726 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-config-data\") pod \"ceilometer-0\" (UID: \"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8\") " pod="openstack/ceilometer-0" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.480776 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-run-httpd\") pod \"ceilometer-0\" (UID: \"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8\") " pod="openstack/ceilometer-0" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.480800 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8\") " pod="openstack/ceilometer-0" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.480835 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sr9z\" (UniqueName: \"kubernetes.io/projected/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-kube-api-access-7sr9z\") pod \"ceilometer-0\" (UID: \"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8\") " pod="openstack/ceilometer-0" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.480854 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8\") " pod="openstack/ceilometer-0" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.480885 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-scripts\") pod \"ceilometer-0\" (UID: \"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8\") " pod="openstack/ceilometer-0" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.480944 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-log-httpd\") pod \"ceilometer-0\" (UID: \"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8\") " pod="openstack/ceilometer-0" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.481376 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-log-httpd\") pod \"ceilometer-0\" (UID: \"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8\") " pod="openstack/ceilometer-0" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.481381 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-run-httpd\") pod \"ceilometer-0\" (UID: \"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8\") " pod="openstack/ceilometer-0" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.485406 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-scripts\") pod \"ceilometer-0\" (UID: \"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8\") " pod="openstack/ceilometer-0" 
Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.486484 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-config-data\") pod \"ceilometer-0\" (UID: \"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8\") " pod="openstack/ceilometer-0" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.488792 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8\") " pod="openstack/ceilometer-0" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.489073 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8\") " pod="openstack/ceilometer-0" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.501204 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sr9z\" (UniqueName: \"kubernetes.io/projected/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-kube-api-access-7sr9z\") pod \"ceilometer-0\" (UID: \"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8\") " pod="openstack/ceilometer-0" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.525121 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.582538 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2dbb3c56-c8bf-4c87-a68d-f158b52467da-ovsdbserver-sb\") pod \"2dbb3c56-c8bf-4c87-a68d-f158b52467da\" (UID: \"2dbb3c56-c8bf-4c87-a68d-f158b52467da\") " Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.582706 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hfqs\" (UniqueName: \"kubernetes.io/projected/2dbb3c56-c8bf-4c87-a68d-f158b52467da-kube-api-access-5hfqs\") pod \"2dbb3c56-c8bf-4c87-a68d-f158b52467da\" (UID: \"2dbb3c56-c8bf-4c87-a68d-f158b52467da\") " Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.582768 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2dbb3c56-c8bf-4c87-a68d-f158b52467da-dns-svc\") pod \"2dbb3c56-c8bf-4c87-a68d-f158b52467da\" (UID: \"2dbb3c56-c8bf-4c87-a68d-f158b52467da\") " Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.582918 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dbb3c56-c8bf-4c87-a68d-f158b52467da-config\") pod \"2dbb3c56-c8bf-4c87-a68d-f158b52467da\" (UID: \"2dbb3c56-c8bf-4c87-a68d-f158b52467da\") " Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.583046 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2dbb3c56-c8bf-4c87-a68d-f158b52467da-ovsdbserver-nb\") pod \"2dbb3c56-c8bf-4c87-a68d-f158b52467da\" (UID: \"2dbb3c56-c8bf-4c87-a68d-f158b52467da\") " Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.583105 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/2dbb3c56-c8bf-4c87-a68d-f158b52467da-dns-swift-storage-0\") pod \"2dbb3c56-c8bf-4c87-a68d-f158b52467da\" (UID: \"2dbb3c56-c8bf-4c87-a68d-f158b52467da\") " Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.598166 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dbb3c56-c8bf-4c87-a68d-f158b52467da-kube-api-access-5hfqs" (OuterVolumeSpecName: "kube-api-access-5hfqs") pod "2dbb3c56-c8bf-4c87-a68d-f158b52467da" (UID: "2dbb3c56-c8bf-4c87-a68d-f158b52467da"). InnerVolumeSpecName "kube-api-access-5hfqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.611294 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dbb3c56-c8bf-4c87-a68d-f158b52467da-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2dbb3c56-c8bf-4c87-a68d-f158b52467da" (UID: "2dbb3c56-c8bf-4c87-a68d-f158b52467da"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.611926 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dbb3c56-c8bf-4c87-a68d-f158b52467da-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2dbb3c56-c8bf-4c87-a68d-f158b52467da" (UID: "2dbb3c56-c8bf-4c87-a68d-f158b52467da"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.612012 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dbb3c56-c8bf-4c87-a68d-f158b52467da-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2dbb3c56-c8bf-4c87-a68d-f158b52467da" (UID: "2dbb3c56-c8bf-4c87-a68d-f158b52467da"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.625212 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dbb3c56-c8bf-4c87-a68d-f158b52467da-config" (OuterVolumeSpecName: "config") pod "2dbb3c56-c8bf-4c87-a68d-f158b52467da" (UID: "2dbb3c56-c8bf-4c87-a68d-f158b52467da"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.633055 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2dbb3c56-c8bf-4c87-a68d-f158b52467da-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2dbb3c56-c8bf-4c87-a68d-f158b52467da" (UID: "2dbb3c56-c8bf-4c87-a68d-f158b52467da"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.634935 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.687645 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dbb3c56-c8bf-4c87-a68d-f158b52467da-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.689145 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2dbb3c56-c8bf-4c87-a68d-f158b52467da-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.689172 4833 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2dbb3c56-c8bf-4c87-a68d-f158b52467da-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.689182 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2dbb3c56-c8bf-4c87-a68d-f158b52467da-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.689192 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hfqs\" (UniqueName: \"kubernetes.io/projected/2dbb3c56-c8bf-4c87-a68d-f158b52467da-kube-api-access-5hfqs\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:44 crc kubenswrapper[4833]: I0219 13:05:44.689203 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2dbb3c56-c8bf-4c87-a68d-f158b52467da-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:45 crc kubenswrapper[4833]: I0219 13:05:45.075606 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:05:45 crc kubenswrapper[4833]: I0219 13:05:45.097133 4833 generic.go:334] "Generic (PLEG): container finished" podID="1dcaf371-d64d-457f-859b-7815899d3450" containerID="2db8f9265dee8cd82a63837bc7e389bdad76a4fb25b9d7a0515902af0499ef33" exitCode=0 Feb 19 13:05:45 crc kubenswrapper[4833]: I0219 13:05:45.097201 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-scljw" event={"ID":"1dcaf371-d64d-457f-859b-7815899d3450","Type":"ContainerDied","Data":"2db8f9265dee8cd82a63837bc7e389bdad76a4fb25b9d7a0515902af0499ef33"} Feb 19 13:05:45 crc kubenswrapper[4833]: I0219 13:05:45.102489 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64db8648b4-fbc89" event={"ID":"ca2586c3-a9f8-45ed-a4e6-406a64304f7d","Type":"ContainerStarted","Data":"1ec1078e71a1570ecd9602f633ec697007dc7275cefb571bd462bcf0c18aa423"} Feb 19 13:05:45 crc kubenswrapper[4833]: I0219 13:05:45.102545 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64db8648b4-fbc89" event={"ID":"ca2586c3-a9f8-45ed-a4e6-406a64304f7d","Type":"ContainerStarted","Data":"50df0bc3d921acdbe90a5a5abb42a203ac9399d4c01c80a34c5fdeafd6fd509e"} Feb 19 13:05:45 crc kubenswrapper[4833]: I0219 13:05:45.106606 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-64db8648b4-fbc89" Feb 19 13:05:45 crc kubenswrapper[4833]: I0219 13:05:45.106673 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-t5f5g" Feb 19 13:05:45 crc kubenswrapper[4833]: I0219 13:05:45.106672 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-64db8648b4-fbc89" Feb 19 13:05:45 crc kubenswrapper[4833]: I0219 13:05:45.108761 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-t5f5g" event={"ID":"2dbb3c56-c8bf-4c87-a68d-f158b52467da","Type":"ContainerDied","Data":"ac08145baf8fa0adebaa8d73ec8c16720c5fb77f4db7fcacc3671cccc6883864"} Feb 19 13:05:45 crc kubenswrapper[4833]: I0219 13:05:45.108790 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cd619681-d181-4b2b-ae1e-0d41dc9d672e","Type":"ContainerStarted","Data":"c42e900b3a41f3807570f8430d701ca47679046cf59f0ef7e4c24742f39b3061"} Feb 19 13:05:45 crc kubenswrapper[4833]: I0219 13:05:45.108812 4833 scope.go:117] "RemoveContainer" containerID="a1843d1463f7c0d69e5675e8ab4f6de296a7983b29f1911e2ab7b6919817f763" Feb 19 13:05:45 crc kubenswrapper[4833]: I0219 13:05:45.152305 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-64db8648b4-fbc89" podStartSLOduration=3.152287165 podStartE2EDuration="3.152287165s" podCreationTimestamp="2026-02-19 13:05:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:05:45.144211721 +0000 UTC m=+1155.539730499" watchObservedRunningTime="2026-02-19 13:05:45.152287165 +0000 UTC m=+1155.547805933" Feb 19 13:05:45 crc kubenswrapper[4833]: I0219 13:05:45.183530 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-t5f5g"] Feb 19 13:05:45 crc kubenswrapper[4833]: I0219 13:05:45.197717 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-t5f5g"] Feb 19 13:05:45 crc kubenswrapper[4833]: I0219 13:05:45.744938 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:05:45 crc kubenswrapper[4833]: I0219 13:05:45.745350 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:05:46 crc kubenswrapper[4833]: I0219 13:05:46.115594 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8","Type":"ContainerStarted","Data":"ec4178f52b38ea782ec1c67e0b3c7e768a9c579c4e9577dddbae08cb9e136105"} Feb 19 13:05:46 crc kubenswrapper[4833]: I0219 13:05:46.371037 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dbb3c56-c8bf-4c87-a68d-f158b52467da" path="/var/lib/kubelet/pods/2dbb3c56-c8bf-4c87-a68d-f158b52467da/volumes" Feb 19 13:05:47 crc kubenswrapper[4833]: I0219 13:05:47.131395 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8","Type":"ContainerStarted","Data":"6f784bd68afc71d989886878e0c43b47e29ee1433e0c84dd8daaca1d15bf022e"} Feb 19 13:05:47 crc 
kubenswrapper[4833]: I0219 13:05:47.133821 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-546699bf4c-sqbpl" event={"ID":"6595e595-9cbc-44bb-8629-a53da3b75bd6","Type":"ContainerStarted","Data":"07e570ac2f94ad571647b3210de6306289600934be21c4e44a8271b4d2bdbdd8"} Feb 19 13:05:47 crc kubenswrapper[4833]: I0219 13:05:47.133850 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-546699bf4c-sqbpl" event={"ID":"6595e595-9cbc-44bb-8629-a53da3b75bd6","Type":"ContainerStarted","Data":"edabcffacaa8b643380948ff66f677341827d32bd3688872d795dbec3171f994"} Feb 19 13:05:47 crc kubenswrapper[4833]: I0219 13:05:47.135960 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cd619681-d181-4b2b-ae1e-0d41dc9d672e","Type":"ContainerStarted","Data":"98146cc2b590e3f9812b2d3841df103ea1e0567c70e8e60775d45a4c62b5f464"} Feb 19 13:05:47 crc kubenswrapper[4833]: I0219 13:05:47.136092 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="cd619681-d181-4b2b-ae1e-0d41dc9d672e" containerName="cinder-api-log" containerID="cri-o://c42e900b3a41f3807570f8430d701ca47679046cf59f0ef7e4c24742f39b3061" gracePeriod=30 Feb 19 13:05:47 crc kubenswrapper[4833]: I0219 13:05:47.136134 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 19 13:05:47 crc kubenswrapper[4833]: I0219 13:05:47.136134 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="cd619681-d181-4b2b-ae1e-0d41dc9d672e" containerName="cinder-api" containerID="cri-o://98146cc2b590e3f9812b2d3841df103ea1e0567c70e8e60775d45a4c62b5f464" gracePeriod=30 Feb 19 13:05:47 crc kubenswrapper[4833]: I0219 13:05:47.138585 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-scljw" event={"ID":"1dcaf371-d64d-457f-859b-7815899d3450","Type":"ContainerStarted","Data":"110c9ce2e716acbaa3c882a3a8195dd2a30b6cd06773f6928b346706c8abb09a"} Feb 19 13:05:47 crc kubenswrapper[4833]: I0219 13:05:47.138720 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-scljw" Feb 19 13:05:47 crc kubenswrapper[4833]: I0219 13:05:47.147136 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4616286-268c-4a6e-9542-699a0da55dbd","Type":"ContainerStarted","Data":"11c38a94b2f30e3b7b153408175b6c1195eaaac414ff0edcd9bbd51a0cd72e89"} Feb 19 13:05:47 crc kubenswrapper[4833]: I0219 13:05:47.147190 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4616286-268c-4a6e-9542-699a0da55dbd","Type":"ContainerStarted","Data":"dbfade14a6f4fbebaf8b88de10911a85033aac1b629beb5d84ab6c0d414960de"} Feb 19 13:05:47 crc kubenswrapper[4833]: I0219 13:05:47.149978 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-56b45cfbd8-5dfjr" event={"ID":"f30e4d86-a08b-4021-8c83-3fb5abe86152","Type":"ContainerStarted","Data":"e8be94b80f3cfb29593d3074f002404a9cf6dd878a02cc94c1d88101bba5fa9e"} Feb 19 13:05:47 crc kubenswrapper[4833]: I0219 13:05:47.150051 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-56b45cfbd8-5dfjr" event={"ID":"f30e4d86-a08b-4021-8c83-3fb5abe86152","Type":"ContainerStarted","Data":"186574d1ab62442ecf475a0bd5b1523aec4c1baa86db06935e5bdde784ad0e84"} Feb 19 13:05:47 crc 
kubenswrapper[4833]: I0219 13:05:47.161268 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-546699bf4c-sqbpl" podStartSLOduration=2.842172882 podStartE2EDuration="5.161250235s" podCreationTimestamp="2026-02-19 13:05:42 +0000 UTC" firstStartedPulling="2026-02-19 13:05:43.240665437 +0000 UTC m=+1153.636184205" lastFinishedPulling="2026-02-19 13:05:45.55974275 +0000 UTC m=+1155.955261558" observedRunningTime="2026-02-19 13:05:47.155925857 +0000 UTC m=+1157.551444625" watchObservedRunningTime="2026-02-19 13:05:47.161250235 +0000 UTC m=+1157.556769003" Feb 19 13:05:47 crc kubenswrapper[4833]: I0219 13:05:47.183824 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.492349208 podStartE2EDuration="5.183800301s" podCreationTimestamp="2026-02-19 13:05:42 +0000 UTC" firstStartedPulling="2026-02-19 13:05:43.882841361 +0000 UTC m=+1154.278360129" lastFinishedPulling="2026-02-19 13:05:45.574292454 +0000 UTC m=+1155.969811222" observedRunningTime="2026-02-19 13:05:47.179177343 +0000 UTC m=+1157.574696111" watchObservedRunningTime="2026-02-19 13:05:47.183800301 +0000 UTC m=+1157.579319069" Feb 19 13:05:47 crc kubenswrapper[4833]: I0219 13:05:47.205454 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-scljw" podStartSLOduration=5.205430372 podStartE2EDuration="5.205430372s" podCreationTimestamp="2026-02-19 13:05:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:05:47.201144143 +0000 UTC m=+1157.596662911" watchObservedRunningTime="2026-02-19 13:05:47.205430372 +0000 UTC m=+1157.600949140" Feb 19 13:05:47 crc kubenswrapper[4833]: I0219 13:05:47.231782 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.231761213 podStartE2EDuration="4.231761213s" podCreationTimestamp="2026-02-19 13:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:05:47.228822102 +0000 UTC m=+1157.624340870" watchObservedRunningTime="2026-02-19 13:05:47.231761213 +0000 UTC m=+1157.627279991" Feb 19 13:05:47 crc kubenswrapper[4833]: I0219 13:05:47.246245 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-56b45cfbd8-5dfjr" podStartSLOduration=3.22593162 podStartE2EDuration="5.246225095s" podCreationTimestamp="2026-02-19 13:05:42 +0000 UTC" firstStartedPulling="2026-02-19 13:05:43.540564776 +0000 UTC m=+1153.936083544" lastFinishedPulling="2026-02-19 13:05:45.560858251 +0000 UTC m=+1155.956377019" observedRunningTime="2026-02-19 13:05:47.243383866 +0000 UTC m=+1157.638902644" watchObservedRunningTime="2026-02-19 13:05:47.246225095 +0000 UTC m=+1157.641743863" Feb 19 13:05:47 crc kubenswrapper[4833]: I0219 13:05:47.930164 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 13:05:47 crc kubenswrapper[4833]: I0219 13:05:47.987858 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd619681-d181-4b2b-ae1e-0d41dc9d672e-config-data-custom\") pod \"cd619681-d181-4b2b-ae1e-0d41dc9d672e\" (UID: \"cd619681-d181-4b2b-ae1e-0d41dc9d672e\") " Feb 19 13:05:47 crc kubenswrapper[4833]: I0219 13:05:47.987906 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd619681-d181-4b2b-ae1e-0d41dc9d672e-logs\") pod \"cd619681-d181-4b2b-ae1e-0d41dc9d672e\" (UID: \"cd619681-d181-4b2b-ae1e-0d41dc9d672e\") " Feb 19 13:05:47 crc kubenswrapper[4833]: I0219 13:05:47.987964 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g276q\" (UniqueName: \"kubernetes.io/projected/cd619681-d181-4b2b-ae1e-0d41dc9d672e-kube-api-access-g276q\") pod \"cd619681-d181-4b2b-ae1e-0d41dc9d672e\" (UID: \"cd619681-d181-4b2b-ae1e-0d41dc9d672e\") " Feb 19 13:05:47 crc kubenswrapper[4833]: I0219 13:05:47.987995 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd619681-d181-4b2b-ae1e-0d41dc9d672e-config-data\") pod \"cd619681-d181-4b2b-ae1e-0d41dc9d672e\" (UID: \"cd619681-d181-4b2b-ae1e-0d41dc9d672e\") " Feb 19 13:05:47 crc kubenswrapper[4833]: I0219 13:05:47.988200 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd619681-d181-4b2b-ae1e-0d41dc9d672e-etc-machine-id\") pod \"cd619681-d181-4b2b-ae1e-0d41dc9d672e\" (UID: \"cd619681-d181-4b2b-ae1e-0d41dc9d672e\") " Feb 19 13:05:47 crc kubenswrapper[4833]: I0219 13:05:47.988226 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd619681-d181-4b2b-ae1e-0d41dc9d672e-scripts\") pod \"cd619681-d181-4b2b-ae1e-0d41dc9d672e\" (UID: \"cd619681-d181-4b2b-ae1e-0d41dc9d672e\") " Feb 19 13:05:47 crc kubenswrapper[4833]: I0219 13:05:47.988249 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd619681-d181-4b2b-ae1e-0d41dc9d672e-combined-ca-bundle\") pod \"cd619681-d181-4b2b-ae1e-0d41dc9d672e\" (UID: \"cd619681-d181-4b2b-ae1e-0d41dc9d672e\") " Feb 19 13:05:47 crc kubenswrapper[4833]: I0219 13:05:47.989676 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd619681-d181-4b2b-ae1e-0d41dc9d672e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "cd619681-d181-4b2b-ae1e-0d41dc9d672e" (UID: "cd619681-d181-4b2b-ae1e-0d41dc9d672e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:05:47 crc kubenswrapper[4833]: I0219 13:05:47.990017 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd619681-d181-4b2b-ae1e-0d41dc9d672e-logs" (OuterVolumeSpecName: "logs") pod "cd619681-d181-4b2b-ae1e-0d41dc9d672e" (UID: "cd619681-d181-4b2b-ae1e-0d41dc9d672e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.013951 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd619681-d181-4b2b-ae1e-0d41dc9d672e-kube-api-access-g276q" (OuterVolumeSpecName: "kube-api-access-g276q") pod "cd619681-d181-4b2b-ae1e-0d41dc9d672e" (UID: "cd619681-d181-4b2b-ae1e-0d41dc9d672e"). InnerVolumeSpecName "kube-api-access-g276q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.013959 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd619681-d181-4b2b-ae1e-0d41dc9d672e-scripts" (OuterVolumeSpecName: "scripts") pod "cd619681-d181-4b2b-ae1e-0d41dc9d672e" (UID: "cd619681-d181-4b2b-ae1e-0d41dc9d672e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.022588 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd619681-d181-4b2b-ae1e-0d41dc9d672e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cd619681-d181-4b2b-ae1e-0d41dc9d672e" (UID: "cd619681-d181-4b2b-ae1e-0d41dc9d672e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.050079 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd619681-d181-4b2b-ae1e-0d41dc9d672e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd619681-d181-4b2b-ae1e-0d41dc9d672e" (UID: "cd619681-d181-4b2b-ae1e-0d41dc9d672e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.086682 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd619681-d181-4b2b-ae1e-0d41dc9d672e-config-data" (OuterVolumeSpecName: "config-data") pod "cd619681-d181-4b2b-ae1e-0d41dc9d672e" (UID: "cd619681-d181-4b2b-ae1e-0d41dc9d672e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.107905 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g276q\" (UniqueName: \"kubernetes.io/projected/cd619681-d181-4b2b-ae1e-0d41dc9d672e-kube-api-access-g276q\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.107946 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd619681-d181-4b2b-ae1e-0d41dc9d672e-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.107962 4833 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cd619681-d181-4b2b-ae1e-0d41dc9d672e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.107971 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd619681-d181-4b2b-ae1e-0d41dc9d672e-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.107981 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd619681-d181-4b2b-ae1e-0d41dc9d672e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.107991 4833 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd619681-d181-4b2b-ae1e-0d41dc9d672e-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.108036 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd619681-d181-4b2b-ae1e-0d41dc9d672e-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.167860 4833 generic.go:334] "Generic (PLEG): container finished" podID="cd619681-d181-4b2b-ae1e-0d41dc9d672e" containerID="98146cc2b590e3f9812b2d3841df103ea1e0567c70e8e60775d45a4c62b5f464" exitCode=0 Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.168143 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cd619681-d181-4b2b-ae1e-0d41dc9d672e","Type":"ContainerDied","Data":"98146cc2b590e3f9812b2d3841df103ea1e0567c70e8e60775d45a4c62b5f464"} Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.168164 4833 generic.go:334] "Generic (PLEG): container finished" podID="cd619681-d181-4b2b-ae1e-0d41dc9d672e" containerID="c42e900b3a41f3807570f8430d701ca47679046cf59f0ef7e4c24742f39b3061" exitCode=143 Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.168189 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cd619681-d181-4b2b-ae1e-0d41dc9d672e","Type":"ContainerDied","Data":"c42e900b3a41f3807570f8430d701ca47679046cf59f0ef7e4c24742f39b3061"} Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.168205 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"cd619681-d181-4b2b-ae1e-0d41dc9d672e","Type":"ContainerDied","Data":"a4ca29c3a004c538d8c5d400b418eb47a2e555b3b1f18740e0ee7b3b35f830ea"} Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.168224 4833 scope.go:117] "RemoveContainer" containerID="98146cc2b590e3f9812b2d3841df103ea1e0567c70e8e60775d45a4c62b5f464" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.168117 
4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.180919 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8","Type":"ContainerStarted","Data":"d1cd809acdad1a7ca6ab899ee8581bdb370ab1466b0d8dfd3345f39155c51d86"} Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.211860 4833 scope.go:117] "RemoveContainer" containerID="c42e900b3a41f3807570f8430d701ca47679046cf59f0ef7e4c24742f39b3061" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.248565 4833 scope.go:117] "RemoveContainer" containerID="98146cc2b590e3f9812b2d3841df103ea1e0567c70e8e60775d45a4c62b5f464" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.252283 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 19 13:05:48 crc kubenswrapper[4833]: E0219 13:05:48.253142 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98146cc2b590e3f9812b2d3841df103ea1e0567c70e8e60775d45a4c62b5f464\": container with ID starting with 98146cc2b590e3f9812b2d3841df103ea1e0567c70e8e60775d45a4c62b5f464 not found: ID does not exist" containerID="98146cc2b590e3f9812b2d3841df103ea1e0567c70e8e60775d45a4c62b5f464" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.253191 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98146cc2b590e3f9812b2d3841df103ea1e0567c70e8e60775d45a4c62b5f464"} err="failed to get container status \"98146cc2b590e3f9812b2d3841df103ea1e0567c70e8e60775d45a4c62b5f464\": rpc error: code = NotFound desc = could not find container \"98146cc2b590e3f9812b2d3841df103ea1e0567c70e8e60775d45a4c62b5f464\": container with ID starting with 98146cc2b590e3f9812b2d3841df103ea1e0567c70e8e60775d45a4c62b5f464 not found: ID does not exist" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.253222 4833 scope.go:117] "RemoveContainer" containerID="c42e900b3a41f3807570f8430d701ca47679046cf59f0ef7e4c24742f39b3061" Feb 19 13:05:48 crc kubenswrapper[4833]: E0219 13:05:48.253488 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c42e900b3a41f3807570f8430d701ca47679046cf59f0ef7e4c24742f39b3061\": container with ID starting with c42e900b3a41f3807570f8430d701ca47679046cf59f0ef7e4c24742f39b3061 not found: ID does not exist" containerID="c42e900b3a41f3807570f8430d701ca47679046cf59f0ef7e4c24742f39b3061" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.253569 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c42e900b3a41f3807570f8430d701ca47679046cf59f0ef7e4c24742f39b3061"} err="failed to get container status \"c42e900b3a41f3807570f8430d701ca47679046cf59f0ef7e4c24742f39b3061\": rpc error: code = NotFound desc = could not find container \"c42e900b3a41f3807570f8430d701ca47679046cf59f0ef7e4c24742f39b3061\": container with ID starting with c42e900b3a41f3807570f8430d701ca47679046cf59f0ef7e4c24742f39b3061 not found: ID does not exist" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.253591 4833 scope.go:117] "RemoveContainer" containerID="98146cc2b590e3f9812b2d3841df103ea1e0567c70e8e60775d45a4c62b5f464" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.253780 4833 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"98146cc2b590e3f9812b2d3841df103ea1e0567c70e8e60775d45a4c62b5f464"} err="failed to get container status \"98146cc2b590e3f9812b2d3841df103ea1e0567c70e8e60775d45a4c62b5f464\": rpc error: code = NotFound desc = could not find container \"98146cc2b590e3f9812b2d3841df103ea1e0567c70e8e60775d45a4c62b5f464\": container with ID starting with 98146cc2b590e3f9812b2d3841df103ea1e0567c70e8e60775d45a4c62b5f464 not found: ID does not exist" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.253806 4833 scope.go:117] "RemoveContainer" containerID="c42e900b3a41f3807570f8430d701ca47679046cf59f0ef7e4c24742f39b3061" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.254020 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c42e900b3a41f3807570f8430d701ca47679046cf59f0ef7e4c24742f39b3061"} err="failed to get container status \"c42e900b3a41f3807570f8430d701ca47679046cf59f0ef7e4c24742f39b3061\": rpc error: code = NotFound desc = could not find container \"c42e900b3a41f3807570f8430d701ca47679046cf59f0ef7e4c24742f39b3061\": container with ID starting with c42e900b3a41f3807570f8430d701ca47679046cf59f0ef7e4c24742f39b3061 not found: ID does not exist" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.264603 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.274366 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 19 13:05:48 crc kubenswrapper[4833]: E0219 13:05:48.274852 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd619681-d181-4b2b-ae1e-0d41dc9d672e" containerName="cinder-api" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.274869 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd619681-d181-4b2b-ae1e-0d41dc9d672e" containerName="cinder-api" Feb 19 13:05:48 crc kubenswrapper[4833]: E0219 13:05:48.274895 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dbb3c56-c8bf-4c87-a68d-f158b52467da" containerName="init" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.274901 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dbb3c56-c8bf-4c87-a68d-f158b52467da" containerName="init" Feb 19 13:05:48 crc kubenswrapper[4833]: E0219 13:05:48.274918 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd619681-d181-4b2b-ae1e-0d41dc9d672e" containerName="cinder-api-log" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.274924 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd619681-d181-4b2b-ae1e-0d41dc9d672e" containerName="cinder-api-log" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.275093 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd619681-d181-4b2b-ae1e-0d41dc9d672e" containerName="cinder-api" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.275111 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd619681-d181-4b2b-ae1e-0d41dc9d672e" containerName="cinder-api-log" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.275124 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dbb3c56-c8bf-4c87-a68d-f158b52467da" containerName="init" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.276086 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.279352 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.279607 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.279735 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.292124 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.332888 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd619681-d181-4b2b-ae1e-0d41dc9d672e" path="/var/lib/kubelet/pods/cd619681-d181-4b2b-ae1e-0d41dc9d672e/volumes" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.369019 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.423959 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ffac1cb-2dd7-4ff9-92e1-a41a23411f57-config-data\") pod \"cinder-api-0\" (UID: \"8ffac1cb-2dd7-4ff9-92e1-a41a23411f57\") " pod="openstack/cinder-api-0" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.424236 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgxh5\" (UniqueName: \"kubernetes.io/projected/8ffac1cb-2dd7-4ff9-92e1-a41a23411f57-kube-api-access-wgxh5\") pod \"cinder-api-0\" (UID: \"8ffac1cb-2dd7-4ff9-92e1-a41a23411f57\") " pod="openstack/cinder-api-0" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.424349 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ffac1cb-2dd7-4ff9-92e1-a41a23411f57-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8ffac1cb-2dd7-4ff9-92e1-a41a23411f57\") " pod="openstack/cinder-api-0" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.424427 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ffac1cb-2dd7-4ff9-92e1-a41a23411f57-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8ffac1cb-2dd7-4ff9-92e1-a41a23411f57\") " pod="openstack/cinder-api-0" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.424515 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ffac1cb-2dd7-4ff9-92e1-a41a23411f57-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8ffac1cb-2dd7-4ff9-92e1-a41a23411f57\") " pod="openstack/cinder-api-0" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.424654 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ffac1cb-2dd7-4ff9-92e1-a41a23411f57-logs\") pod \"cinder-api-0\" (UID: \"8ffac1cb-2dd7-4ff9-92e1-a41a23411f57\") " pod="openstack/cinder-api-0" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.424760 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ffac1cb-2dd7-4ff9-92e1-a41a23411f57-config-data-custom\") pod \"cinder-api-0\" (UID: \"8ffac1cb-2dd7-4ff9-92e1-a41a23411f57\") " pod="openstack/cinder-api-0" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.424831 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ffac1cb-2dd7-4ff9-92e1-a41a23411f57-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8ffac1cb-2dd7-4ff9-92e1-a41a23411f57\") " pod="openstack/cinder-api-0" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.424895 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ffac1cb-2dd7-4ff9-92e1-a41a23411f57-scripts\") pod \"cinder-api-0\" (UID: \"8ffac1cb-2dd7-4ff9-92e1-a41a23411f57\") " pod="openstack/cinder-api-0" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.526174 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgxh5\" (UniqueName: \"kubernetes.io/projected/8ffac1cb-2dd7-4ff9-92e1-a41a23411f57-kube-api-access-wgxh5\") pod \"cinder-api-0\" (UID: \"8ffac1cb-2dd7-4ff9-92e1-a41a23411f57\") " pod="openstack/cinder-api-0" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.526239 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ffac1cb-2dd7-4ff9-92e1-a41a23411f57-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8ffac1cb-2dd7-4ff9-92e1-a41a23411f57\") " pod="openstack/cinder-api-0" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.526261 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ffac1cb-2dd7-4ff9-92e1-a41a23411f57-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8ffac1cb-2dd7-4ff9-92e1-a41a23411f57\") " pod="openstack/cinder-api-0" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.526280 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ffac1cb-2dd7-4ff9-92e1-a41a23411f57-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8ffac1cb-2dd7-4ff9-92e1-a41a23411f57\") " pod="openstack/cinder-api-0" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.526312 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ffac1cb-2dd7-4ff9-92e1-a41a23411f57-logs\") pod \"cinder-api-0\" (UID: \"8ffac1cb-2dd7-4ff9-92e1-a41a23411f57\") " pod="openstack/cinder-api-0" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.526341 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ffac1cb-2dd7-4ff9-92e1-a41a23411f57-config-data-custom\") pod \"cinder-api-0\" (UID: \"8ffac1cb-2dd7-4ff9-92e1-a41a23411f57\") " pod="openstack/cinder-api-0" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.526354 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ffac1cb-2dd7-4ff9-92e1-a41a23411f57-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8ffac1cb-2dd7-4ff9-92e1-a41a23411f57\") " pod="openstack/cinder-api-0" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.526369 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ffac1cb-2dd7-4ff9-92e1-a41a23411f57-scripts\") pod \"cinder-api-0\" (UID: \"8ffac1cb-2dd7-4ff9-92e1-a41a23411f57\") " pod="openstack/cinder-api-0" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.526427 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ffac1cb-2dd7-4ff9-92e1-a41a23411f57-config-data\") pod \"cinder-api-0\" (UID: \"8ffac1cb-2dd7-4ff9-92e1-a41a23411f57\") " pod="openstack/cinder-api-0" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.526815 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8ffac1cb-2dd7-4ff9-92e1-a41a23411f57-etc-machine-id\") pod \"cinder-api-0\" (UID: \"8ffac1cb-2dd7-4ff9-92e1-a41a23411f57\") " pod="openstack/cinder-api-0" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.527069 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ffac1cb-2dd7-4ff9-92e1-a41a23411f57-logs\") pod \"cinder-api-0\" (UID: \"8ffac1cb-2dd7-4ff9-92e1-a41a23411f57\") " pod="openstack/cinder-api-0" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.533433 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ffac1cb-2dd7-4ff9-92e1-a41a23411f57-public-tls-certs\") pod \"cinder-api-0\" (UID: \"8ffac1cb-2dd7-4ff9-92e1-a41a23411f57\") " pod="openstack/cinder-api-0" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.533743 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ffac1cb-2dd7-4ff9-92e1-a41a23411f57-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"8ffac1cb-2dd7-4ff9-92e1-a41a23411f57\") " pod="openstack/cinder-api-0" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.534342 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8ffac1cb-2dd7-4ff9-92e1-a41a23411f57-config-data-custom\") pod \"cinder-api-0\" (UID: \"8ffac1cb-2dd7-4ff9-92e1-a41a23411f57\") " pod="openstack/cinder-api-0" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.534443 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ffac1cb-2dd7-4ff9-92e1-a41a23411f57-config-data\") pod \"cinder-api-0\" (UID: \"8ffac1cb-2dd7-4ff9-92e1-a41a23411f57\") " pod="openstack/cinder-api-0" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.535006 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ffac1cb-2dd7-4ff9-92e1-a41a23411f57-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"8ffac1cb-2dd7-4ff9-92e1-a41a23411f57\") " pod="openstack/cinder-api-0" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.536717 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ffac1cb-2dd7-4ff9-92e1-a41a23411f57-scripts\") pod \"cinder-api-0\" (UID: \"8ffac1cb-2dd7-4ff9-92e1-a41a23411f57\") " pod="openstack/cinder-api-0" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.543669 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgxh5\" 
(UniqueName: \"kubernetes.io/projected/8ffac1cb-2dd7-4ff9-92e1-a41a23411f57-kube-api-access-wgxh5\") pod \"cinder-api-0\" (UID: \"8ffac1cb-2dd7-4ff9-92e1-a41a23411f57\") " pod="openstack/cinder-api-0" Feb 19 13:05:48 crc kubenswrapper[4833]: I0219 13:05:48.660736 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 19 13:05:49 crc kubenswrapper[4833]: I0219 13:05:49.011307 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-8468b886b8-mz8xd"] Feb 19 13:05:49 crc kubenswrapper[4833]: I0219 13:05:49.013024 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-8468b886b8-mz8xd" Feb 19 13:05:49 crc kubenswrapper[4833]: I0219 13:05:49.015298 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 19 13:05:49 crc kubenswrapper[4833]: I0219 13:05:49.015313 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 19 13:05:49 crc kubenswrapper[4833]: I0219 13:05:49.032754 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-8468b886b8-mz8xd"] Feb 19 13:05:49 crc kubenswrapper[4833]: I0219 13:05:49.115101 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 19 13:05:49 crc kubenswrapper[4833]: I0219 13:05:49.138728 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad7485d9-4e14-49c1-bf60-8a0146d26df0-config-data\") pod \"barbican-api-8468b886b8-mz8xd\" (UID: \"ad7485d9-4e14-49c1-bf60-8a0146d26df0\") " pod="openstack/barbican-api-8468b886b8-mz8xd" Feb 19 13:05:49 crc kubenswrapper[4833]: I0219 13:05:49.138953 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xldbd\" (UniqueName: \"kubernetes.io/projected/ad7485d9-4e14-49c1-bf60-8a0146d26df0-kube-api-access-xldbd\") pod \"barbican-api-8468b886b8-mz8xd\" (UID: \"ad7485d9-4e14-49c1-bf60-8a0146d26df0\") " pod="openstack/barbican-api-8468b886b8-mz8xd" Feb 19 13:05:49 crc kubenswrapper[4833]: I0219 13:05:49.139052 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad7485d9-4e14-49c1-bf60-8a0146d26df0-config-data-custom\") pod \"barbican-api-8468b886b8-mz8xd\" (UID: \"ad7485d9-4e14-49c1-bf60-8a0146d26df0\") " pod="openstack/barbican-api-8468b886b8-mz8xd" Feb 19 13:05:49 crc kubenswrapper[4833]: I0219 13:05:49.139299 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad7485d9-4e14-49c1-bf60-8a0146d26df0-combined-ca-bundle\") pod \"barbican-api-8468b886b8-mz8xd\" (UID: \"ad7485d9-4e14-49c1-bf60-8a0146d26df0\") " pod="openstack/barbican-api-8468b886b8-mz8xd" Feb 19 13:05:49 crc kubenswrapper[4833]: I0219 13:05:49.139356 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad7485d9-4e14-49c1-bf60-8a0146d26df0-public-tls-certs\") pod \"barbican-api-8468b886b8-mz8xd\" (UID: \"ad7485d9-4e14-49c1-bf60-8a0146d26df0\") " pod="openstack/barbican-api-8468b886b8-mz8xd" Feb 19 13:05:49 crc kubenswrapper[4833]: I0219 13:05:49.139395 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad7485d9-4e14-49c1-bf60-8a0146d26df0-logs\") pod \"barbican-api-8468b886b8-mz8xd\" (UID: \"ad7485d9-4e14-49c1-bf60-8a0146d26df0\") " pod="openstack/barbican-api-8468b886b8-mz8xd" Feb 19 13:05:49 crc kubenswrapper[4833]: I0219 13:05:49.139701 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad7485d9-4e14-49c1-bf60-8a0146d26df0-internal-tls-certs\") pod \"barbican-api-8468b886b8-mz8xd\" (UID: \"ad7485d9-4e14-49c1-bf60-8a0146d26df0\") " pod="openstack/barbican-api-8468b886b8-mz8xd" Feb 19 13:05:49 crc kubenswrapper[4833]: I0219 13:05:49.200468 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8","Type":"ContainerStarted","Data":"890084e668006b412cde3e4304a29578481436109e36e60e90cdb7a4e522ae15"} Feb 19 13:05:49 crc kubenswrapper[4833]: I0219 13:05:49.202776 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8ffac1cb-2dd7-4ff9-92e1-a41a23411f57","Type":"ContainerStarted","Data":"ff16651d33c0b72ab721c4d380fbe430c86ee0e43c58159cb5aee1cbf8385b7d"} Feb 19 13:05:49 crc kubenswrapper[4833]: I0219 13:05:49.241766 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad7485d9-4e14-49c1-bf60-8a0146d26df0-combined-ca-bundle\") pod \"barbican-api-8468b886b8-mz8xd\" (UID: \"ad7485d9-4e14-49c1-bf60-8a0146d26df0\") " pod="openstack/barbican-api-8468b886b8-mz8xd" Feb 19 13:05:49 crc kubenswrapper[4833]: I0219 13:05:49.241814 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad7485d9-4e14-49c1-bf60-8a0146d26df0-public-tls-certs\") pod \"barbican-api-8468b886b8-mz8xd\" (UID: \"ad7485d9-4e14-49c1-bf60-8a0146d26df0\") " pod="openstack/barbican-api-8468b886b8-mz8xd" Feb 19 13:05:49 crc kubenswrapper[4833]: I0219 13:05:49.241834 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad7485d9-4e14-49c1-bf60-8a0146d26df0-logs\") pod \"barbican-api-8468b886b8-mz8xd\" (UID: \"ad7485d9-4e14-49c1-bf60-8a0146d26df0\") " pod="openstack/barbican-api-8468b886b8-mz8xd" Feb 19 13:05:49 crc kubenswrapper[4833]: I0219 13:05:49.241924 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad7485d9-4e14-49c1-bf60-8a0146d26df0-internal-tls-certs\") pod \"barbican-api-8468b886b8-mz8xd\" (UID: \"ad7485d9-4e14-49c1-bf60-8a0146d26df0\") " pod="openstack/barbican-api-8468b886b8-mz8xd" Feb 19 13:05:49 crc kubenswrapper[4833]: I0219 13:05:49.241984 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad7485d9-4e14-49c1-bf60-8a0146d26df0-config-data\") pod \"barbican-api-8468b886b8-mz8xd\" (UID: \"ad7485d9-4e14-49c1-bf60-8a0146d26df0\") " pod="openstack/barbican-api-8468b886b8-mz8xd" Feb 19 13:05:49 crc kubenswrapper[4833]: I0219 13:05:49.242008 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xldbd\" (UniqueName: \"kubernetes.io/projected/ad7485d9-4e14-49c1-bf60-8a0146d26df0-kube-api-access-xldbd\") pod \"barbican-api-8468b886b8-mz8xd\" 
(UID: \"ad7485d9-4e14-49c1-bf60-8a0146d26df0\") " pod="openstack/barbican-api-8468b886b8-mz8xd" Feb 19 13:05:49 crc kubenswrapper[4833]: I0219 13:05:49.242030 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad7485d9-4e14-49c1-bf60-8a0146d26df0-config-data-custom\") pod \"barbican-api-8468b886b8-mz8xd\" (UID: \"ad7485d9-4e14-49c1-bf60-8a0146d26df0\") " pod="openstack/barbican-api-8468b886b8-mz8xd" Feb 19 13:05:49 crc kubenswrapper[4833]: I0219 13:05:49.243273 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ad7485d9-4e14-49c1-bf60-8a0146d26df0-logs\") pod \"barbican-api-8468b886b8-mz8xd\" (UID: \"ad7485d9-4e14-49c1-bf60-8a0146d26df0\") " pod="openstack/barbican-api-8468b886b8-mz8xd" Feb 19 13:05:49 crc kubenswrapper[4833]: I0219 13:05:49.248604 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad7485d9-4e14-49c1-bf60-8a0146d26df0-config-data\") pod \"barbican-api-8468b886b8-mz8xd\" (UID: \"ad7485d9-4e14-49c1-bf60-8a0146d26df0\") " pod="openstack/barbican-api-8468b886b8-mz8xd" Feb 19 13:05:49 crc kubenswrapper[4833]: I0219 13:05:49.250175 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad7485d9-4e14-49c1-bf60-8a0146d26df0-internal-tls-certs\") pod \"barbican-api-8468b886b8-mz8xd\" (UID: \"ad7485d9-4e14-49c1-bf60-8a0146d26df0\") " pod="openstack/barbican-api-8468b886b8-mz8xd" Feb 19 13:05:49 crc kubenswrapper[4833]: I0219 13:05:49.251013 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad7485d9-4e14-49c1-bf60-8a0146d26df0-config-data-custom\") pod \"barbican-api-8468b886b8-mz8xd\" (UID: \"ad7485d9-4e14-49c1-bf60-8a0146d26df0\") " pod="openstack/barbican-api-8468b886b8-mz8xd" Feb 19 13:05:49 crc kubenswrapper[4833]: I0219 13:05:49.251873 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad7485d9-4e14-49c1-bf60-8a0146d26df0-combined-ca-bundle\") pod \"barbican-api-8468b886b8-mz8xd\" (UID: \"ad7485d9-4e14-49c1-bf60-8a0146d26df0\") " pod="openstack/barbican-api-8468b886b8-mz8xd" Feb 19 13:05:49 crc kubenswrapper[4833]: I0219 13:05:49.252379 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad7485d9-4e14-49c1-bf60-8a0146d26df0-public-tls-certs\") pod \"barbican-api-8468b886b8-mz8xd\" (UID: \"ad7485d9-4e14-49c1-bf60-8a0146d26df0\") " pod="openstack/barbican-api-8468b886b8-mz8xd" Feb 19 13:05:49 crc kubenswrapper[4833]: I0219 13:05:49.270289 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xldbd\" (UniqueName: \"kubernetes.io/projected/ad7485d9-4e14-49c1-bf60-8a0146d26df0-kube-api-access-xldbd\") pod \"barbican-api-8468b886b8-mz8xd\" (UID: \"ad7485d9-4e14-49c1-bf60-8a0146d26df0\") " pod="openstack/barbican-api-8468b886b8-mz8xd" Feb 19 13:05:49 crc kubenswrapper[4833]: I0219 13:05:49.341317 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-8468b886b8-mz8xd" Feb 19 13:05:49 crc kubenswrapper[4833]: W0219 13:05:49.644340 4833 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ef3a268_01cc_4ba4_b7cc_628bb6328271.slice/crio-conmon-4e3c744ff86343470fce04f27de91d43abf5f1ee2bbeeba6e0a9a1ec38048e1c.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ef3a268_01cc_4ba4_b7cc_628bb6328271.slice/crio-conmon-4e3c744ff86343470fce04f27de91d43abf5f1ee2bbeeba6e0a9a1ec38048e1c.scope: no such file or directory Feb 19 13:05:49 crc kubenswrapper[4833]: W0219 13:05:49.645104 4833 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ef3a268_01cc_4ba4_b7cc_628bb6328271.slice/crio-4e3c744ff86343470fce04f27de91d43abf5f1ee2bbeeba6e0a9a1ec38048e1c.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ef3a268_01cc_4ba4_b7cc_628bb6328271.slice/crio-4e3c744ff86343470fce04f27de91d43abf5f1ee2bbeeba6e0a9a1ec38048e1c.scope: no such file or directory Feb 19 13:05:49 crc kubenswrapper[4833]: W0219 13:05:49.645408 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ef3a268_01cc_4ba4_b7cc_628bb6328271.slice/crio-cd687e064cb7f68bc62094b50ffacef805b2ecc52b09791ac784529029214652.scope WatchSource:0}: Error finding container cd687e064cb7f68bc62094b50ffacef805b2ecc52b09791ac784529029214652: Status 404 returned error can't find the container with id cd687e064cb7f68bc62094b50ffacef805b2ecc52b09791ac784529029214652 Feb 19 13:05:49 crc kubenswrapper[4833]: W0219 13:05:49.664715 4833 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2dbb3c56_c8bf_4c87_a68d_f158b52467da.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2dbb3c56_c8bf_4c87_a68d_f158b52467da.slice: no such file or directory Feb 19 13:05:49 crc kubenswrapper[4833]: W0219 13:05:49.680955 4833 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd619681_d181_4b2b_ae1e_0d41dc9d672e.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd619681_d181_4b2b_ae1e_0d41dc9d672e.slice: no such file or directory Feb 19 13:05:49 crc kubenswrapper[4833]: E0219 13:05:49.889269 4833 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ef3a268_01cc_4ba4_b7cc_628bb6328271.slice/crio-conmon-5b6d9758d28dcd5c467432f262b83933bd6760d53adeea57286377fdcc71b9ca.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ef3a268_01cc_4ba4_b7cc_628bb6328271.slice/crio-5b6d9758d28dcd5c467432f262b83933bd6760d53adeea57286377fdcc71b9ca.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ef3a268_01cc_4ba4_b7cc_628bb6328271.slice/crio-conmon-cd687e064cb7f68bc62094b50ffacef805b2ecc52b09791ac784529029214652.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2902e7f1_6f1b_4b67_a9fa_fd031a961900.slice/crio-1ec4cbf7581a959707028b2706c3ca38c637a925059150fd741b33a78020f7b9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod742dbb25_3782_45e5_931a_185f8a98d24d.slice/crio-3980653fdf65ab3e87ef4bea9ad49df8b7e77c5b61072a713a7e9a62f6e490d1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5aed427_a4af_40b6_bd9c_10284e0935ce.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6ff486b_931e_4973_9bac_5d68a07e9991.slice/crio-conmon-3d70c6239ac05d67654bb3afebc44385962eceb9e68f31b0b04f547ba1eec565.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5aed427_a4af_40b6_bd9c_10284e0935ce.slice/crio-bcf7a4a5d4ff805093ad794ec8cd1f12a74d138f0fdb5168ecb9511e1da34332\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5aed427_a4af_40b6_bd9c_10284e0935ce.slice/crio-conmon-38b1634674e1c50f0e04e2286162bbeb49033885f7e7e96701a5067e935c944b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod742dbb25_3782_45e5_931a_185f8a98d24d.slice/crio-conmon-3980653fdf65ab3e87ef4bea9ad49df8b7e77c5b61072a713a7e9a62f6e490d1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca786541_c266_41b5_a91d_3d626d530b45.slice/crio-0efb3353c1de49eca92c89a1e6fdac180547740ed2732551b5f7176f0c1fbafe\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6ff486b_931e_4973_9bac_5d68a07e9991.slice/crio-3d70c6239ac05d67654bb3afebc44385962eceb9e68f31b0b04f547ba1eec565.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2902e7f1_6f1b_4b67_a9fa_fd031a961900.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca786541_c266_41b5_a91d_3d626d530b45.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5aed427_a4af_40b6_bd9c_10284e0935ce.slice/crio-38b1634674e1c50f0e04e2286162bbeb49033885f7e7e96701a5067e935c944b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod742dbb25_3782_45e5_931a_185f8a98d24d.slice/crio-784f5967a6cb31482e89a7e4a8cc956b4398f812bf262c943e4994fd5143c4d4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6ff486b_931e_4973_9bac_5d68a07e9991.slice/crio-d65311c3ccec03d5f9352f0d0b8a53a3e84a5e84ddb604a730b44b0e95c93070.scope\": RecentStats: unable to find data in memory cache]" Feb 19 13:05:49 crc kubenswrapper[4833]: I0219 13:05:49.891158 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-api-8468b886b8-mz8xd"] Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.092026 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6c6c854b45-nzb7k" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.154857 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5c68546898-xbhvb" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.166659 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/742dbb25-3782-45e5-931a-185f8a98d24d-logs\") pod \"742dbb25-3782-45e5-931a-185f8a98d24d\" (UID: \"742dbb25-3782-45e5-931a-185f8a98d24d\") " Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.166761 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/742dbb25-3782-45e5-931a-185f8a98d24d-horizon-secret-key\") pod \"742dbb25-3782-45e5-931a-185f8a98d24d\" (UID: \"742dbb25-3782-45e5-931a-185f8a98d24d\") " Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.166845 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/742dbb25-3782-45e5-931a-185f8a98d24d-config-data\") pod \"742dbb25-3782-45e5-931a-185f8a98d24d\" (UID: \"742dbb25-3782-45e5-931a-185f8a98d24d\") " Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.166907 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px2tp\" (UniqueName: \"kubernetes.io/projected/742dbb25-3782-45e5-931a-185f8a98d24d-kube-api-access-px2tp\") pod \"742dbb25-3782-45e5-931a-185f8a98d24d\" (UID: \"742dbb25-3782-45e5-931a-185f8a98d24d\") " Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.166947 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/742dbb25-3782-45e5-931a-185f8a98d24d-scripts\") pod \"742dbb25-3782-45e5-931a-185f8a98d24d\" (UID: \"742dbb25-3782-45e5-931a-185f8a98d24d\") " Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.168702 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/742dbb25-3782-45e5-931a-185f8a98d24d-logs" (OuterVolumeSpecName: "logs") pod "742dbb25-3782-45e5-931a-185f8a98d24d" (UID: "742dbb25-3782-45e5-931a-185f8a98d24d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.178745 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/742dbb25-3782-45e5-931a-185f8a98d24d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "742dbb25-3782-45e5-931a-185f8a98d24d" (UID: "742dbb25-3782-45e5-931a-185f8a98d24d"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.180552 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/742dbb25-3782-45e5-931a-185f8a98d24d-kube-api-access-px2tp" (OuterVolumeSpecName: "kube-api-access-px2tp") pod "742dbb25-3782-45e5-931a-185f8a98d24d" (UID: "742dbb25-3782-45e5-931a-185f8a98d24d"). InnerVolumeSpecName "kube-api-access-px2tp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.222951 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/742dbb25-3782-45e5-931a-185f8a98d24d-config-data" (OuterVolumeSpecName: "config-data") pod "742dbb25-3782-45e5-931a-185f8a98d24d" (UID: "742dbb25-3782-45e5-931a-185f8a98d24d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.244405 4833 generic.go:334] "Generic (PLEG): container finished" podID="a6ff486b-931e-4973-9bac-5d68a07e9991" containerID="3d70c6239ac05d67654bb3afebc44385962eceb9e68f31b0b04f547ba1eec565" exitCode=137 Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.244943 4833 generic.go:334] "Generic (PLEG): container finished" podID="a6ff486b-931e-4973-9bac-5d68a07e9991" containerID="d65311c3ccec03d5f9352f0d0b8a53a3e84a5e84ddb604a730b44b0e95c93070" exitCode=137 Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.244523 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7594f6fd59-hxk8t" event={"ID":"a6ff486b-931e-4973-9bac-5d68a07e9991","Type":"ContainerDied","Data":"3d70c6239ac05d67654bb3afebc44385962eceb9e68f31b0b04f547ba1eec565"} Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.245111 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7594f6fd59-hxk8t" event={"ID":"a6ff486b-931e-4973-9bac-5d68a07e9991","Type":"ContainerDied","Data":"d65311c3ccec03d5f9352f0d0b8a53a3e84a5e84ddb604a730b44b0e95c93070"} Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.250691 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/742dbb25-3782-45e5-931a-185f8a98d24d-scripts" (OuterVolumeSpecName: "scripts") pod "742dbb25-3782-45e5-931a-185f8a98d24d" (UID: "742dbb25-3782-45e5-931a-185f8a98d24d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.252922 4833 generic.go:334] "Generic (PLEG): container finished" podID="742dbb25-3782-45e5-931a-185f8a98d24d" containerID="3980653fdf65ab3e87ef4bea9ad49df8b7e77c5b61072a713a7e9a62f6e490d1" exitCode=137 Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.253098 4833 generic.go:334] "Generic (PLEG): container finished" podID="742dbb25-3782-45e5-931a-185f8a98d24d" containerID="784f5967a6cb31482e89a7e4a8cc956b4398f812bf262c943e4994fd5143c4d4" exitCode=137 Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.253078 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c6c854b45-nzb7k" event={"ID":"742dbb25-3782-45e5-931a-185f8a98d24d","Type":"ContainerDied","Data":"3980653fdf65ab3e87ef4bea9ad49df8b7e77c5b61072a713a7e9a62f6e490d1"} Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.275784 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c6c854b45-nzb7k" event={"ID":"742dbb25-3782-45e5-931a-185f8a98d24d","Type":"ContainerDied","Data":"784f5967a6cb31482e89a7e4a8cc956b4398f812bf262c943e4994fd5143c4d4"} Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.275951 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c6c854b45-nzb7k" event={"ID":"742dbb25-3782-45e5-931a-185f8a98d24d","Type":"ContainerDied","Data":"e93bf89ff3c35e233076a83615765c98494191cc9181a35f1ada92b7b1e4959c"} Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.275925 4833 scope.go:117] "RemoveContainer" containerID="3980653fdf65ab3e87ef4bea9ad49df8b7e77c5b61072a713a7e9a62f6e490d1" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.253042 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6c6c854b45-nzb7k" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.270098 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/742dbb25-3782-45e5-931a-185f8a98d24d-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.278417 4833 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/742dbb25-3782-45e5-931a-185f8a98d24d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.278433 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/742dbb25-3782-45e5-931a-185f8a98d24d-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.278449 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px2tp\" (UniqueName: \"kubernetes.io/projected/742dbb25-3782-45e5-931a-185f8a98d24d-kube-api-access-px2tp\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.278460 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/742dbb25-3782-45e5-931a-185f8a98d24d-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.292834 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8468b886b8-mz8xd" event={"ID":"ad7485d9-4e14-49c1-bf60-8a0146d26df0","Type":"ContainerStarted","Data":"8bcd710353bcadc7e1e8a34543b23325067dd8267ac9ecdc16233cc1a364d9b0"} Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.292877 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8468b886b8-mz8xd" event={"ID":"ad7485d9-4e14-49c1-bf60-8a0146d26df0","Type":"ContainerStarted","Data":"8b2285269288fec92ccfeb04b4fcff7d6d2e7977ef7204b7b2645e72995f8014"} Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.306212 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8ffac1cb-2dd7-4ff9-92e1-a41a23411f57","Type":"ContainerStarted","Data":"fb8e2f0f84bd8a07d18bcf60c47d7be03dd664ce7b57d5ca98b23cdd4da8a702"} Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.435916 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-84df7d698f-5xhlf"] Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.436105 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-84df7d698f-5xhlf" podUID="0935255f-9e1c-4009-b4d0-e7f4eadcd6ae" containerName="neutron-api" containerID="cri-o://517d2bc1b11f13728498d37500e666009324462a19a2505aafe1a6c0879ac986" gracePeriod=30 Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.436511 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-84df7d698f-5xhlf" podUID="0935255f-9e1c-4009-b4d0-e7f4eadcd6ae" containerName="neutron-httpd" containerID="cri-o://81df4e7e1418cb51eb1e2ec710fa80c6f24186e5bcf6282bb8711eec880df876" gracePeriod=30 Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.436678 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-84df7d698f-5xhlf" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.460148 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-75865f57f7-4q4h9"] Feb 19 13:05:50 crc 
kubenswrapper[4833]: E0219 13:05:50.460685 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="742dbb25-3782-45e5-931a-185f8a98d24d" containerName="horizon-log" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.460853 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="742dbb25-3782-45e5-931a-185f8a98d24d" containerName="horizon-log" Feb 19 13:05:50 crc kubenswrapper[4833]: E0219 13:05:50.460960 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="742dbb25-3782-45e5-931a-185f8a98d24d" containerName="horizon" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.461116 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="742dbb25-3782-45e5-931a-185f8a98d24d" containerName="horizon" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.464855 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="742dbb25-3782-45e5-931a-185f8a98d24d" containerName="horizon-log" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.464959 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="742dbb25-3782-45e5-931a-185f8a98d24d" containerName="horizon" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.465899 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75865f57f7-4q4h9" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.468543 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-84df7d698f-5xhlf" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.476403 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75865f57f7-4q4h9"] Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.499251 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6c6c854b45-nzb7k"] Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.510435 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6c6c854b45-nzb7k"] Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.571839 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5bc667ffbb-qqgnx" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.585368 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwpng\" (UniqueName: \"kubernetes.io/projected/cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7-kube-api-access-lwpng\") pod \"neutron-75865f57f7-4q4h9\" (UID: \"cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7\") " pod="openstack/neutron-75865f57f7-4q4h9" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.585468 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7-public-tls-certs\") pod \"neutron-75865f57f7-4q4h9\" (UID: \"cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7\") " pod="openstack/neutron-75865f57f7-4q4h9" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.585631 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7-httpd-config\") pod \"neutron-75865f57f7-4q4h9\" (UID: \"cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7\") " pod="openstack/neutron-75865f57f7-4q4h9" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.585706 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7-internal-tls-certs\") pod \"neutron-75865f57f7-4q4h9\" (UID: \"cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7\") " pod="openstack/neutron-75865f57f7-4q4h9" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.585736 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7-config\") pod \"neutron-75865f57f7-4q4h9\" (UID: \"cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7\") " pod="openstack/neutron-75865f57f7-4q4h9" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.585772 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7-ovndb-tls-certs\") pod \"neutron-75865f57f7-4q4h9\" (UID: \"cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7\") " pod="openstack/neutron-75865f57f7-4q4h9" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.585844 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7-combined-ca-bundle\") pod \"neutron-75865f57f7-4q4h9\" (UID: \"cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7\") " pod="openstack/neutron-75865f57f7-4q4h9" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.648668 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7b954444d4-2mwt9" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.675761 4833 scope.go:117] "RemoveContainer" containerID="784f5967a6cb31482e89a7e4a8cc956b4398f812bf262c943e4994fd5143c4d4" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.687837 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7-httpd-config\") pod \"neutron-75865f57f7-4q4h9\" (UID: \"cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7\") " pod="openstack/neutron-75865f57f7-4q4h9" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.687920 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7-internal-tls-certs\") pod \"neutron-75865f57f7-4q4h9\" (UID: \"cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7\") " pod="openstack/neutron-75865f57f7-4q4h9" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.687949 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7-config\") pod \"neutron-75865f57f7-4q4h9\" (UID: \"cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7\") " pod="openstack/neutron-75865f57f7-4q4h9" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.688208 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7-ovndb-tls-certs\") pod \"neutron-75865f57f7-4q4h9\" (UID: \"cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7\") " pod="openstack/neutron-75865f57f7-4q4h9" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.688284 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7-combined-ca-bundle\") pod 
\"neutron-75865f57f7-4q4h9\" (UID: \"cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7\") " pod="openstack/neutron-75865f57f7-4q4h9" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.688358 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwpng\" (UniqueName: \"kubernetes.io/projected/cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7-kube-api-access-lwpng\") pod \"neutron-75865f57f7-4q4h9\" (UID: \"cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7\") " pod="openstack/neutron-75865f57f7-4q4h9" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.688710 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7-public-tls-certs\") pod \"neutron-75865f57f7-4q4h9\" (UID: \"cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7\") " pod="openstack/neutron-75865f57f7-4q4h9" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.699051 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7-httpd-config\") pod \"neutron-75865f57f7-4q4h9\" (UID: \"cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7\") " pod="openstack/neutron-75865f57f7-4q4h9" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.699275 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7-config\") pod \"neutron-75865f57f7-4q4h9\" (UID: \"cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7\") " pod="openstack/neutron-75865f57f7-4q4h9" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.710557 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7-internal-tls-certs\") pod \"neutron-75865f57f7-4q4h9\" (UID: \"cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7\") " pod="openstack/neutron-75865f57f7-4q4h9" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.727672 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwpng\" (UniqueName: \"kubernetes.io/projected/cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7-kube-api-access-lwpng\") pod \"neutron-75865f57f7-4q4h9\" (UID: \"cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7\") " pod="openstack/neutron-75865f57f7-4q4h9" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.732266 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7-combined-ca-bundle\") pod \"neutron-75865f57f7-4q4h9\" (UID: \"cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7\") " pod="openstack/neutron-75865f57f7-4q4h9" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.738212 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7-public-tls-certs\") pod \"neutron-75865f57f7-4q4h9\" (UID: \"cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7\") " pod="openstack/neutron-75865f57f7-4q4h9" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.739276 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7-ovndb-tls-certs\") pod \"neutron-75865f57f7-4q4h9\" (UID: \"cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7\") " pod="openstack/neutron-75865f57f7-4q4h9" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 
13:05:50.782390 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75865f57f7-4q4h9" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.817377 4833 scope.go:117] "RemoveContainer" containerID="3980653fdf65ab3e87ef4bea9ad49df8b7e77c5b61072a713a7e9a62f6e490d1" Feb 19 13:05:50 crc kubenswrapper[4833]: E0219 13:05:50.817748 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3980653fdf65ab3e87ef4bea9ad49df8b7e77c5b61072a713a7e9a62f6e490d1\": container with ID starting with 3980653fdf65ab3e87ef4bea9ad49df8b7e77c5b61072a713a7e9a62f6e490d1 not found: ID does not exist" containerID="3980653fdf65ab3e87ef4bea9ad49df8b7e77c5b61072a713a7e9a62f6e490d1" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.817788 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3980653fdf65ab3e87ef4bea9ad49df8b7e77c5b61072a713a7e9a62f6e490d1"} err="failed to get container status \"3980653fdf65ab3e87ef4bea9ad49df8b7e77c5b61072a713a7e9a62f6e490d1\": rpc error: code = NotFound desc = could not find container \"3980653fdf65ab3e87ef4bea9ad49df8b7e77c5b61072a713a7e9a62f6e490d1\": container with ID starting with 3980653fdf65ab3e87ef4bea9ad49df8b7e77c5b61072a713a7e9a62f6e490d1 not found: ID does not exist" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.817837 4833 scope.go:117] "RemoveContainer" containerID="784f5967a6cb31482e89a7e4a8cc956b4398f812bf262c943e4994fd5143c4d4" Feb 19 13:05:50 crc kubenswrapper[4833]: E0219 13:05:50.820317 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"784f5967a6cb31482e89a7e4a8cc956b4398f812bf262c943e4994fd5143c4d4\": container with ID starting with 784f5967a6cb31482e89a7e4a8cc956b4398f812bf262c943e4994fd5143c4d4 not found: ID does not exist" containerID="784f5967a6cb31482e89a7e4a8cc956b4398f812bf262c943e4994fd5143c4d4" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.820341 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"784f5967a6cb31482e89a7e4a8cc956b4398f812bf262c943e4994fd5143c4d4"} err="failed to get container status \"784f5967a6cb31482e89a7e4a8cc956b4398f812bf262c943e4994fd5143c4d4\": rpc error: code = NotFound desc = could not find container \"784f5967a6cb31482e89a7e4a8cc956b4398f812bf262c943e4994fd5143c4d4\": container with ID starting with 784f5967a6cb31482e89a7e4a8cc956b4398f812bf262c943e4994fd5143c4d4 not found: ID does not exist" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.820354 4833 scope.go:117] "RemoveContainer" containerID="3980653fdf65ab3e87ef4bea9ad49df8b7e77c5b61072a713a7e9a62f6e490d1" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.820694 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3980653fdf65ab3e87ef4bea9ad49df8b7e77c5b61072a713a7e9a62f6e490d1"} err="failed to get container status \"3980653fdf65ab3e87ef4bea9ad49df8b7e77c5b61072a713a7e9a62f6e490d1\": rpc error: code = NotFound desc = could not find container \"3980653fdf65ab3e87ef4bea9ad49df8b7e77c5b61072a713a7e9a62f6e490d1\": container with ID starting with 3980653fdf65ab3e87ef4bea9ad49df8b7e77c5b61072a713a7e9a62f6e490d1 not found: ID does not exist" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.820707 4833 scope.go:117] "RemoveContainer" 
containerID="784f5967a6cb31482e89a7e4a8cc956b4398f812bf262c943e4994fd5143c4d4" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.823812 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"784f5967a6cb31482e89a7e4a8cc956b4398f812bf262c943e4994fd5143c4d4"} err="failed to get container status \"784f5967a6cb31482e89a7e4a8cc956b4398f812bf262c943e4994fd5143c4d4\": rpc error: code = NotFound desc = could not find container \"784f5967a6cb31482e89a7e4a8cc956b4398f812bf262c943e4994fd5143c4d4\": container with ID starting with 784f5967a6cb31482e89a7e4a8cc956b4398f812bf262c943e4994fd5143c4d4 not found: ID does not exist" Feb 19 13:05:50 crc kubenswrapper[4833]: I0219 13:05:50.938571 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7594f6fd59-hxk8t" Feb 19 13:05:51 crc kubenswrapper[4833]: I0219 13:05:50.997548 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6q26n\" (UniqueName: \"kubernetes.io/projected/a6ff486b-931e-4973-9bac-5d68a07e9991-kube-api-access-6q26n\") pod \"a6ff486b-931e-4973-9bac-5d68a07e9991\" (UID: \"a6ff486b-931e-4973-9bac-5d68a07e9991\") " Feb 19 13:05:51 crc kubenswrapper[4833]: I0219 13:05:50.997865 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6ff486b-931e-4973-9bac-5d68a07e9991-scripts\") pod \"a6ff486b-931e-4973-9bac-5d68a07e9991\" (UID: \"a6ff486b-931e-4973-9bac-5d68a07e9991\") " Feb 19 13:05:51 crc kubenswrapper[4833]: I0219 13:05:50.997926 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6ff486b-931e-4973-9bac-5d68a07e9991-config-data\") pod \"a6ff486b-931e-4973-9bac-5d68a07e9991\" (UID: \"a6ff486b-931e-4973-9bac-5d68a07e9991\") " Feb 19 13:05:51 crc kubenswrapper[4833]: I0219 13:05:50.998008 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a6ff486b-931e-4973-9bac-5d68a07e9991-horizon-secret-key\") pod \"a6ff486b-931e-4973-9bac-5d68a07e9991\" (UID: \"a6ff486b-931e-4973-9bac-5d68a07e9991\") " Feb 19 13:05:51 crc kubenswrapper[4833]: I0219 13:05:50.998030 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6ff486b-931e-4973-9bac-5d68a07e9991-logs\") pod \"a6ff486b-931e-4973-9bac-5d68a07e9991\" (UID: \"a6ff486b-931e-4973-9bac-5d68a07e9991\") " Feb 19 13:05:51 crc kubenswrapper[4833]: I0219 13:05:50.998856 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6ff486b-931e-4973-9bac-5d68a07e9991-logs" (OuterVolumeSpecName: "logs") pod "a6ff486b-931e-4973-9bac-5d68a07e9991" (UID: "a6ff486b-931e-4973-9bac-5d68a07e9991"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:05:51 crc kubenswrapper[4833]: I0219 13:05:51.005130 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6ff486b-931e-4973-9bac-5d68a07e9991-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a6ff486b-931e-4973-9bac-5d68a07e9991" (UID: "a6ff486b-931e-4973-9bac-5d68a07e9991"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:51 crc kubenswrapper[4833]: I0219 13:05:51.006579 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6ff486b-931e-4973-9bac-5d68a07e9991-kube-api-access-6q26n" (OuterVolumeSpecName: "kube-api-access-6q26n") pod "a6ff486b-931e-4973-9bac-5d68a07e9991" (UID: "a6ff486b-931e-4973-9bac-5d68a07e9991"). InnerVolumeSpecName "kube-api-access-6q26n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:05:51 crc kubenswrapper[4833]: I0219 13:05:51.064071 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6ff486b-931e-4973-9bac-5d68a07e9991-scripts" (OuterVolumeSpecName: "scripts") pod "a6ff486b-931e-4973-9bac-5d68a07e9991" (UID: "a6ff486b-931e-4973-9bac-5d68a07e9991"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:05:51 crc kubenswrapper[4833]: I0219 13:05:51.068674 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6ff486b-931e-4973-9bac-5d68a07e9991-config-data" (OuterVolumeSpecName: "config-data") pod "a6ff486b-931e-4973-9bac-5d68a07e9991" (UID: "a6ff486b-931e-4973-9bac-5d68a07e9991"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:05:51 crc kubenswrapper[4833]: I0219 13:05:51.102711 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6ff486b-931e-4973-9bac-5d68a07e9991-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:51 crc kubenswrapper[4833]: I0219 13:05:51.102737 4833 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a6ff486b-931e-4973-9bac-5d68a07e9991-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:51 crc kubenswrapper[4833]: I0219 13:05:51.102749 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6ff486b-931e-4973-9bac-5d68a07e9991-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:51 crc kubenswrapper[4833]: I0219 13:05:51.102757 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6q26n\" (UniqueName: \"kubernetes.io/projected/a6ff486b-931e-4973-9bac-5d68a07e9991-kube-api-access-6q26n\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:51 crc kubenswrapper[4833]: I0219 13:05:51.102766 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6ff486b-931e-4973-9bac-5d68a07e9991-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:51 crc kubenswrapper[4833]: I0219 13:05:51.318220 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8","Type":"ContainerStarted","Data":"1ff8330995418f20e6a71c0eaa2b565d10db4946e2cd980f82365bc38b5f9f6c"} Feb 19 13:05:51 crc kubenswrapper[4833]: I0219 13:05:51.318376 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 13:05:51 crc kubenswrapper[4833]: I0219 13:05:51.319841 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"8ffac1cb-2dd7-4ff9-92e1-a41a23411f57","Type":"ContainerStarted","Data":"fda69c16f44f567061f80e1ce27ce31a912d21ef839888ba48b6fb700eec5d41"} Feb 19 13:05:51 crc kubenswrapper[4833]: I0219 13:05:51.319900 4833 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/cinder-api-0" Feb 19 13:05:51 crc kubenswrapper[4833]: I0219 13:05:51.321458 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7594f6fd59-hxk8t" event={"ID":"a6ff486b-931e-4973-9bac-5d68a07e9991","Type":"ContainerDied","Data":"f6a94f6a7dae47ac1a91b4c62249b344f7a8676064d9c7b66dad989a6b188235"} Feb 19 13:05:51 crc kubenswrapper[4833]: I0219 13:05:51.321486 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7594f6fd59-hxk8t" Feb 19 13:05:51 crc kubenswrapper[4833]: I0219 13:05:51.321531 4833 scope.go:117] "RemoveContainer" containerID="3d70c6239ac05d67654bb3afebc44385962eceb9e68f31b0b04f547ba1eec565" Feb 19 13:05:51 crc kubenswrapper[4833]: I0219 13:05:51.323522 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84df7d698f-5xhlf" event={"ID":"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae","Type":"ContainerDied","Data":"81df4e7e1418cb51eb1e2ec710fa80c6f24186e5bcf6282bb8711eec880df876"} Feb 19 13:05:51 crc kubenswrapper[4833]: I0219 13:05:51.323489 4833 generic.go:334] "Generic (PLEG): container finished" podID="0935255f-9e1c-4009-b4d0-e7f4eadcd6ae" containerID="81df4e7e1418cb51eb1e2ec710fa80c6f24186e5bcf6282bb8711eec880df876" exitCode=0 Feb 19 13:05:51 crc kubenswrapper[4833]: I0219 13:05:51.326514 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-8468b886b8-mz8xd" event={"ID":"ad7485d9-4e14-49c1-bf60-8a0146d26df0","Type":"ContainerStarted","Data":"239cb664fb466880a4e92350b784fc6dbcb056cd7e82c62aa047a7fab59c9beb"} Feb 19 13:05:51 crc kubenswrapper[4833]: I0219 13:05:51.327235 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8468b886b8-mz8xd" Feb 19 13:05:51 crc kubenswrapper[4833]: I0219 13:05:51.327262 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-8468b886b8-mz8xd" Feb 19 13:05:51 crc kubenswrapper[4833]: I0219 13:05:51.347373 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.754281133 podStartE2EDuration="7.347356387s" podCreationTimestamp="2026-02-19 13:05:44 +0000 UTC" firstStartedPulling="2026-02-19 13:05:45.455128155 +0000 UTC m=+1155.850646953" lastFinishedPulling="2026-02-19 13:05:50.048203439 +0000 UTC m=+1160.443722207" observedRunningTime="2026-02-19 13:05:51.342843152 +0000 UTC m=+1161.738361920" watchObservedRunningTime="2026-02-19 13:05:51.347356387 +0000 UTC m=+1161.742875155" Feb 19 13:05:51 crc kubenswrapper[4833]: I0219 13:05:51.389883 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-8468b886b8-mz8xd" podStartSLOduration=3.389866547 podStartE2EDuration="3.389866547s" podCreationTimestamp="2026-02-19 13:05:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:05:51.383418338 +0000 UTC m=+1161.778937106" watchObservedRunningTime="2026-02-19 13:05:51.389866547 +0000 UTC m=+1161.785385315" Feb 19 13:05:51 crc kubenswrapper[4833]: I0219 13:05:51.429068 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.4290464959999998 podStartE2EDuration="3.429046496s" podCreationTimestamp="2026-02-19 13:05:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-19 13:05:51.409158433 +0000 UTC m=+1161.804677201" watchObservedRunningTime="2026-02-19 13:05:51.429046496 +0000 UTC m=+1161.824565264" Feb 19 13:05:51 crc kubenswrapper[4833]: I0219 13:05:51.466544 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7594f6fd59-hxk8t"] Feb 19 13:05:51 crc kubenswrapper[4833]: I0219 13:05:51.478547 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7594f6fd59-hxk8t"] Feb 19 13:05:51 crc kubenswrapper[4833]: I0219 13:05:51.490319 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75865f57f7-4q4h9"] Feb 19 13:05:51 crc kubenswrapper[4833]: I0219 13:05:51.521516 4833 scope.go:117] "RemoveContainer" containerID="d65311c3ccec03d5f9352f0d0b8a53a3e84a5e84ddb604a730b44b0e95c93070" Feb 19 13:05:52 crc kubenswrapper[4833]: I0219 13:05:52.325567 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="742dbb25-3782-45e5-931a-185f8a98d24d" path="/var/lib/kubelet/pods/742dbb25-3782-45e5-931a-185f8a98d24d/volumes" Feb 19 13:05:52 crc kubenswrapper[4833]: I0219 13:05:52.326267 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6ff486b-931e-4973-9bac-5d68a07e9991" path="/var/lib/kubelet/pods/a6ff486b-931e-4973-9bac-5d68a07e9991/volumes" Feb 19 13:05:52 crc kubenswrapper[4833]: I0219 13:05:52.337153 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75865f57f7-4q4h9" event={"ID":"cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7","Type":"ContainerStarted","Data":"1f713eaa915937865ef983e46f1654b17a4e4bee2f696bb88f680f38a062a6dc"} Feb 19 13:05:52 crc kubenswrapper[4833]: I0219 13:05:52.337217 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75865f57f7-4q4h9" event={"ID":"cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7","Type":"ContainerStarted","Data":"33bdc5860d1ea4f6026172429df79d814f2631ae619c9200ed4a68a5aace20af"} Feb 19 13:05:52 crc kubenswrapper[4833]: I0219 13:05:52.337228 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75865f57f7-4q4h9" event={"ID":"cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7","Type":"ContainerStarted","Data":"bd724190b9ca34e64cb84bb11f3d0b5dd03223916e52f7939e01508334007365"} Feb 19 13:05:52 crc kubenswrapper[4833]: I0219 13:05:52.360918 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-75865f57f7-4q4h9" podStartSLOduration=2.360897524 podStartE2EDuration="2.360897524s" podCreationTimestamp="2026-02-19 13:05:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:05:52.353250242 +0000 UTC m=+1162.748769010" watchObservedRunningTime="2026-02-19 13:05:52.360897524 +0000 UTC m=+1162.756416292" Feb 19 13:05:52 crc kubenswrapper[4833]: I0219 13:05:52.370092 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-84df7d698f-5xhlf" podUID="0935255f-9e1c-4009-b4d0-e7f4eadcd6ae" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.157:9696/\": dial tcp 10.217.0.157:9696: connect: connection refused" Feb 19 13:05:52 crc kubenswrapper[4833]: I0219 13:05:52.509059 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5bc667ffbb-qqgnx" Feb 19 13:05:52 crc kubenswrapper[4833]: I0219 13:05:52.725372 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7b954444d4-2mwt9" Feb 19 13:05:52 
crc kubenswrapper[4833]: I0219 13:05:52.802649 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5bc667ffbb-qqgnx"] Feb 19 13:05:53 crc kubenswrapper[4833]: I0219 13:05:53.375730 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5bc667ffbb-qqgnx" podUID="bbfe4179-53a2-4a74-9045-7a498c9aad70" containerName="horizon-log" containerID="cri-o://7c2c0e64de50f0dccba6e9b4398327ba4df4280f15a675770e28fa45c64968c4" gracePeriod=30 Feb 19 13:05:53 crc kubenswrapper[4833]: I0219 13:05:53.376165 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-75865f57f7-4q4h9" Feb 19 13:05:53 crc kubenswrapper[4833]: I0219 13:05:53.376247 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5bc667ffbb-qqgnx" podUID="bbfe4179-53a2-4a74-9045-7a498c9aad70" containerName="horizon" containerID="cri-o://da885bfb26f991009fe0b1720edeffac674904d13ac1face22d0ea347803793c" gracePeriod=30 Feb 19 13:05:53 crc kubenswrapper[4833]: I0219 13:05:53.378929 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-scljw" Feb 19 13:05:53 crc kubenswrapper[4833]: I0219 13:05:53.494281 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-hq965"] Feb 19 13:05:53 crc kubenswrapper[4833]: I0219 13:05:53.494565 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-hq965" podUID="99a828fb-7fd8-432c-890f-2cebf8e2afad" containerName="dnsmasq-dns" containerID="cri-o://d86d447a36669ce8992ef9ecdd5922ef4f999085d397e80792cbb4ee6e82ce56" gracePeriod=10 Feb 19 13:05:53 crc kubenswrapper[4833]: I0219 13:05:53.775396 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 19 13:05:53 crc kubenswrapper[4833]: I0219 13:05:53.833681 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 13:05:54 crc kubenswrapper[4833]: I0219 13:05:54.418334 4833 generic.go:334] "Generic (PLEG): container finished" podID="99a828fb-7fd8-432c-890f-2cebf8e2afad" containerID="d86d447a36669ce8992ef9ecdd5922ef4f999085d397e80792cbb4ee6e82ce56" exitCode=0 Feb 19 13:05:54 crc kubenswrapper[4833]: I0219 13:05:54.418373 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-hq965" event={"ID":"99a828fb-7fd8-432c-890f-2cebf8e2afad","Type":"ContainerDied","Data":"d86d447a36669ce8992ef9ecdd5922ef4f999085d397e80792cbb4ee6e82ce56"} Feb 19 13:05:54 crc kubenswrapper[4833]: I0219 13:05:54.418547 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c4616286-268c-4a6e-9542-699a0da55dbd" containerName="cinder-scheduler" containerID="cri-o://dbfade14a6f4fbebaf8b88de10911a85033aac1b629beb5d84ab6c0d414960de" gracePeriod=30 Feb 19 13:05:54 crc kubenswrapper[4833]: I0219 13:05:54.418582 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c4616286-268c-4a6e-9542-699a0da55dbd" containerName="probe" containerID="cri-o://11c38a94b2f30e3b7b153408175b6c1195eaaac414ff0edcd9bbd51a0cd72e89" gracePeriod=30 Feb 19 13:05:54 crc kubenswrapper[4833]: I0219 13:05:54.838069 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-64db8648b4-fbc89" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 
13:05:55.034981 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-hq965" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.084609 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/99a828fb-7fd8-432c-890f-2cebf8e2afad-dns-swift-storage-0\") pod \"99a828fb-7fd8-432c-890f-2cebf8e2afad\" (UID: \"99a828fb-7fd8-432c-890f-2cebf8e2afad\") " Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.084649 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfx5h\" (UniqueName: \"kubernetes.io/projected/99a828fb-7fd8-432c-890f-2cebf8e2afad-kube-api-access-xfx5h\") pod \"99a828fb-7fd8-432c-890f-2cebf8e2afad\" (UID: \"99a828fb-7fd8-432c-890f-2cebf8e2afad\") " Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.084720 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99a828fb-7fd8-432c-890f-2cebf8e2afad-dns-svc\") pod \"99a828fb-7fd8-432c-890f-2cebf8e2afad\" (UID: \"99a828fb-7fd8-432c-890f-2cebf8e2afad\") " Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.084842 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99a828fb-7fd8-432c-890f-2cebf8e2afad-ovsdbserver-nb\") pod \"99a828fb-7fd8-432c-890f-2cebf8e2afad\" (UID: \"99a828fb-7fd8-432c-890f-2cebf8e2afad\") " Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.084882 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99a828fb-7fd8-432c-890f-2cebf8e2afad-config\") pod \"99a828fb-7fd8-432c-890f-2cebf8e2afad\" (UID: \"99a828fb-7fd8-432c-890f-2cebf8e2afad\") " Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.084899 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99a828fb-7fd8-432c-890f-2cebf8e2afad-ovsdbserver-sb\") pod \"99a828fb-7fd8-432c-890f-2cebf8e2afad\" (UID: \"99a828fb-7fd8-432c-890f-2cebf8e2afad\") " Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.128714 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99a828fb-7fd8-432c-890f-2cebf8e2afad-kube-api-access-xfx5h" (OuterVolumeSpecName: "kube-api-access-xfx5h") pod "99a828fb-7fd8-432c-890f-2cebf8e2afad" (UID: "99a828fb-7fd8-432c-890f-2cebf8e2afad"). InnerVolumeSpecName "kube-api-access-xfx5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.170240 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99a828fb-7fd8-432c-890f-2cebf8e2afad-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "99a828fb-7fd8-432c-890f-2cebf8e2afad" (UID: "99a828fb-7fd8-432c-890f-2cebf8e2afad"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.170259 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99a828fb-7fd8-432c-890f-2cebf8e2afad-config" (OuterVolumeSpecName: "config") pod "99a828fb-7fd8-432c-890f-2cebf8e2afad" (UID: "99a828fb-7fd8-432c-890f-2cebf8e2afad"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.172238 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99a828fb-7fd8-432c-890f-2cebf8e2afad-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "99a828fb-7fd8-432c-890f-2cebf8e2afad" (UID: "99a828fb-7fd8-432c-890f-2cebf8e2afad"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.186841 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99a828fb-7fd8-432c-890f-2cebf8e2afad-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.186870 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/99a828fb-7fd8-432c-890f-2cebf8e2afad-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.186879 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfx5h\" (UniqueName: \"kubernetes.io/projected/99a828fb-7fd8-432c-890f-2cebf8e2afad-kube-api-access-xfx5h\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.186890 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99a828fb-7fd8-432c-890f-2cebf8e2afad-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.193092 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99a828fb-7fd8-432c-890f-2cebf8e2afad-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "99a828fb-7fd8-432c-890f-2cebf8e2afad" (UID: "99a828fb-7fd8-432c-890f-2cebf8e2afad"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.206423 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99a828fb-7fd8-432c-890f-2cebf8e2afad-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "99a828fb-7fd8-432c-890f-2cebf8e2afad" (UID: "99a828fb-7fd8-432c-890f-2cebf8e2afad"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.217990 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-84df7d698f-5xhlf" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.278121 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-64db8648b4-fbc89" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.288768 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-public-tls-certs\") pod \"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae\" (UID: \"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae\") " Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.288820 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6k8hf\" (UniqueName: \"kubernetes.io/projected/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-kube-api-access-6k8hf\") pod \"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae\" (UID: \"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae\") " Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.288934 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-httpd-config\") pod \"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae\" (UID: \"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae\") " Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.289145 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-combined-ca-bundle\") pod \"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae\" (UID: \"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae\") " Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.289178 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-config\") pod \"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae\" (UID: \"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae\") " Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.289244 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-ovndb-tls-certs\") pod \"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae\" (UID: \"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae\") " Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.289308 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-internal-tls-certs\") pod \"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae\" (UID: \"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae\") " Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.289696 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/99a828fb-7fd8-432c-890f-2cebf8e2afad-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.289712 4833 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/99a828fb-7fd8-432c-890f-2cebf8e2afad-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.298659 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-httpd-config" (OuterVolumeSpecName: "httpd-config") pod 
"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae" (UID: "0935255f-9e1c-4009-b4d0-e7f4eadcd6ae"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.311229 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-kube-api-access-6k8hf" (OuterVolumeSpecName: "kube-api-access-6k8hf") pod "0935255f-9e1c-4009-b4d0-e7f4eadcd6ae" (UID: "0935255f-9e1c-4009-b4d0-e7f4eadcd6ae"). InnerVolumeSpecName "kube-api-access-6k8hf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.352969 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0935255f-9e1c-4009-b4d0-e7f4eadcd6ae" (UID: "0935255f-9e1c-4009-b4d0-e7f4eadcd6ae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.361052 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-config" (OuterVolumeSpecName: "config") pod "0935255f-9e1c-4009-b4d0-e7f4eadcd6ae" (UID: "0935255f-9e1c-4009-b4d0-e7f4eadcd6ae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.370900 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0935255f-9e1c-4009-b4d0-e7f4eadcd6ae" (UID: "0935255f-9e1c-4009-b4d0-e7f4eadcd6ae"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.385650 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0935255f-9e1c-4009-b4d0-e7f4eadcd6ae" (UID: "0935255f-9e1c-4009-b4d0-e7f4eadcd6ae"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.391795 4833 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.391821 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.391832 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.391843 4833 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.391854 4833 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.391862 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6k8hf\" (UniqueName: \"kubernetes.io/projected/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-kube-api-access-6k8hf\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.405941 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "0935255f-9e1c-4009-b4d0-e7f4eadcd6ae" (UID: "0935255f-9e1c-4009-b4d0-e7f4eadcd6ae"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.428110 4833 generic.go:334] "Generic (PLEG): container finished" podID="c4616286-268c-4a6e-9542-699a0da55dbd" containerID="11c38a94b2f30e3b7b153408175b6c1195eaaac414ff0edcd9bbd51a0cd72e89" exitCode=0 Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.428193 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4616286-268c-4a6e-9542-699a0da55dbd","Type":"ContainerDied","Data":"11c38a94b2f30e3b7b153408175b6c1195eaaac414ff0edcd9bbd51a0cd72e89"} Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.430131 4833 generic.go:334] "Generic (PLEG): container finished" podID="0935255f-9e1c-4009-b4d0-e7f4eadcd6ae" containerID="517d2bc1b11f13728498d37500e666009324462a19a2505aafe1a6c0879ac986" exitCode=0 Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.430195 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84df7d698f-5xhlf" event={"ID":"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae","Type":"ContainerDied","Data":"517d2bc1b11f13728498d37500e666009324462a19a2505aafe1a6c0879ac986"} Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.430202 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-84df7d698f-5xhlf" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.430222 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-84df7d698f-5xhlf" event={"ID":"0935255f-9e1c-4009-b4d0-e7f4eadcd6ae","Type":"ContainerDied","Data":"cc99e463b036172f50b14fa9a70323a56f34fb1d5242dd065ce40489a219c649"} Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.430238 4833 scope.go:117] "RemoveContainer" containerID="81df4e7e1418cb51eb1e2ec710fa80c6f24186e5bcf6282bb8711eec880df876" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.432779 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-hq965" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.432816 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-hq965" event={"ID":"99a828fb-7fd8-432c-890f-2cebf8e2afad","Type":"ContainerDied","Data":"55fb4fcb8be56b12eca21c54ec973b74ffa083ba9535bbb1cc1b000123de5ea8"} Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.455190 4833 scope.go:117] "RemoveContainer" containerID="517d2bc1b11f13728498d37500e666009324462a19a2505aafe1a6c0879ac986" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.474409 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-hq965"] Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.482777 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-hq965"] Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.486914 4833 scope.go:117] "RemoveContainer" containerID="81df4e7e1418cb51eb1e2ec710fa80c6f24186e5bcf6282bb8711eec880df876" Feb 19 13:05:55 crc kubenswrapper[4833]: E0219 13:05:55.487337 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81df4e7e1418cb51eb1e2ec710fa80c6f24186e5bcf6282bb8711eec880df876\": container with ID starting with 81df4e7e1418cb51eb1e2ec710fa80c6f24186e5bcf6282bb8711eec880df876 not found: ID does not exist" containerID="81df4e7e1418cb51eb1e2ec710fa80c6f24186e5bcf6282bb8711eec880df876" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.487366 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81df4e7e1418cb51eb1e2ec710fa80c6f24186e5bcf6282bb8711eec880df876"} err="failed to get container status \"81df4e7e1418cb51eb1e2ec710fa80c6f24186e5bcf6282bb8711eec880df876\": rpc error: code = NotFound desc = could not find container \"81df4e7e1418cb51eb1e2ec710fa80c6f24186e5bcf6282bb8711eec880df876\": container with ID starting with 81df4e7e1418cb51eb1e2ec710fa80c6f24186e5bcf6282bb8711eec880df876 not found: ID does not exist" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.487400 4833 scope.go:117] "RemoveContainer" containerID="517d2bc1b11f13728498d37500e666009324462a19a2505aafe1a6c0879ac986" Feb 19 13:05:55 crc kubenswrapper[4833]: E0219 13:05:55.488840 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"517d2bc1b11f13728498d37500e666009324462a19a2505aafe1a6c0879ac986\": container with ID starting with 517d2bc1b11f13728498d37500e666009324462a19a2505aafe1a6c0879ac986 not found: ID does not exist" containerID="517d2bc1b11f13728498d37500e666009324462a19a2505aafe1a6c0879ac986" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.488866 4833 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"517d2bc1b11f13728498d37500e666009324462a19a2505aafe1a6c0879ac986"} err="failed to get container status \"517d2bc1b11f13728498d37500e666009324462a19a2505aafe1a6c0879ac986\": rpc error: code = NotFound desc = could not find container \"517d2bc1b11f13728498d37500e666009324462a19a2505aafe1a6c0879ac986\": container with ID starting with 517d2bc1b11f13728498d37500e666009324462a19a2505aafe1a6c0879ac986 not found: ID does not exist" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.488880 4833 scope.go:117] "RemoveContainer" containerID="d86d447a36669ce8992ef9ecdd5922ef4f999085d397e80792cbb4ee6e82ce56" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.490911 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-84df7d698f-5xhlf"] Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.493543 4833 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.498036 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-84df7d698f-5xhlf"] Feb 19 13:05:55 crc kubenswrapper[4833]: I0219 13:05:55.507542 4833 scope.go:117] "RemoveContainer" containerID="a685e5b34f15ba2f6c9efea24f3f9682b9d3051fed32e0c5afbb2aea44e71478" Feb 19 13:05:56 crc kubenswrapper[4833]: I0219 13:05:56.351777 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0935255f-9e1c-4009-b4d0-e7f4eadcd6ae" path="/var/lib/kubelet/pods/0935255f-9e1c-4009-b4d0-e7f4eadcd6ae/volumes" Feb 19 13:05:56 crc kubenswrapper[4833]: I0219 13:05:56.353361 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99a828fb-7fd8-432c-890f-2cebf8e2afad" path="/var/lib/kubelet/pods/99a828fb-7fd8-432c-890f-2cebf8e2afad/volumes" Feb 19 13:05:57 crc kubenswrapper[4833]: I0219 13:05:57.457822 4833 generic.go:334] "Generic (PLEG): container finished" podID="bbfe4179-53a2-4a74-9045-7a498c9aad70" containerID="da885bfb26f991009fe0b1720edeffac674904d13ac1face22d0ea347803793c" exitCode=0 Feb 19 13:05:57 crc kubenswrapper[4833]: I0219 13:05:57.457937 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bc667ffbb-qqgnx" event={"ID":"bbfe4179-53a2-4a74-9045-7a498c9aad70","Type":"ContainerDied","Data":"da885bfb26f991009fe0b1720edeffac674904d13ac1face22d0ea347803793c"} Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.269604 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.331564 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5bc667ffbb-qqgnx" podUID="bbfe4179-53a2-4a74-9045-7a498c9aad70" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.347313 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4616286-268c-4a6e-9542-699a0da55dbd-scripts\") pod \"c4616286-268c-4a6e-9542-699a0da55dbd\" (UID: \"c4616286-268c-4a6e-9542-699a0da55dbd\") " Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.347355 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55cnf\" (UniqueName: \"kubernetes.io/projected/c4616286-268c-4a6e-9542-699a0da55dbd-kube-api-access-55cnf\") pod \"c4616286-268c-4a6e-9542-699a0da55dbd\" (UID: \"c4616286-268c-4a6e-9542-699a0da55dbd\") " Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.347407 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4616286-268c-4a6e-9542-699a0da55dbd-config-data-custom\") pod \"c4616286-268c-4a6e-9542-699a0da55dbd\" (UID: \"c4616286-268c-4a6e-9542-699a0da55dbd\") " Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.347705 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4616286-268c-4a6e-9542-699a0da55dbd-combined-ca-bundle\") pod \"c4616286-268c-4a6e-9542-699a0da55dbd\" (UID: \"c4616286-268c-4a6e-9542-699a0da55dbd\") " Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.347741 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4616286-268c-4a6e-9542-699a0da55dbd-etc-machine-id\") pod \"c4616286-268c-4a6e-9542-699a0da55dbd\" (UID: \"c4616286-268c-4a6e-9542-699a0da55dbd\") " Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.347756 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4616286-268c-4a6e-9542-699a0da55dbd-config-data\") pod \"c4616286-268c-4a6e-9542-699a0da55dbd\" (UID: \"c4616286-268c-4a6e-9542-699a0da55dbd\") " Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.347848 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4616286-268c-4a6e-9542-699a0da55dbd-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c4616286-268c-4a6e-9542-699a0da55dbd" (UID: "c4616286-268c-4a6e-9542-699a0da55dbd"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.348155 4833 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c4616286-268c-4a6e-9542-699a0da55dbd-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.353438 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4616286-268c-4a6e-9542-699a0da55dbd-scripts" (OuterVolumeSpecName: "scripts") pod "c4616286-268c-4a6e-9542-699a0da55dbd" (UID: "c4616286-268c-4a6e-9542-699a0da55dbd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.354541 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4616286-268c-4a6e-9542-699a0da55dbd-kube-api-access-55cnf" (OuterVolumeSpecName: "kube-api-access-55cnf") pod "c4616286-268c-4a6e-9542-699a0da55dbd" (UID: "c4616286-268c-4a6e-9542-699a0da55dbd"). InnerVolumeSpecName "kube-api-access-55cnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.355378 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4616286-268c-4a6e-9542-699a0da55dbd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c4616286-268c-4a6e-9542-699a0da55dbd" (UID: "c4616286-268c-4a6e-9542-699a0da55dbd"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.412516 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4616286-268c-4a6e-9542-699a0da55dbd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4616286-268c-4a6e-9542-699a0da55dbd" (UID: "c4616286-268c-4a6e-9542-699a0da55dbd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.449803 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4616286-268c-4a6e-9542-699a0da55dbd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.449833 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4616286-268c-4a6e-9542-699a0da55dbd-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.449842 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55cnf\" (UniqueName: \"kubernetes.io/projected/c4616286-268c-4a6e-9542-699a0da55dbd-kube-api-access-55cnf\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.449852 4833 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c4616286-268c-4a6e-9542-699a0da55dbd-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.467990 4833 generic.go:334] "Generic (PLEG): container finished" podID="c4616286-268c-4a6e-9542-699a0da55dbd" containerID="dbfade14a6f4fbebaf8b88de10911a85033aac1b629beb5d84ab6c0d414960de" exitCode=0 Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.468042 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4616286-268c-4a6e-9542-699a0da55dbd","Type":"ContainerDied","Data":"dbfade14a6f4fbebaf8b88de10911a85033aac1b629beb5d84ab6c0d414960de"} Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.468090 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c4616286-268c-4a6e-9542-699a0da55dbd","Type":"ContainerDied","Data":"46cc0652b0bfa3a269efb79fe7f20e46db12b27a7504b7dea07a5bc246d44454"} Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.468078 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.468113 4833 scope.go:117] "RemoveContainer" containerID="11c38a94b2f30e3b7b153408175b6c1195eaaac414ff0edcd9bbd51a0cd72e89" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.484563 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4616286-268c-4a6e-9542-699a0da55dbd-config-data" (OuterVolumeSpecName: "config-data") pod "c4616286-268c-4a6e-9542-699a0da55dbd" (UID: "c4616286-268c-4a6e-9542-699a0da55dbd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.499414 4833 scope.go:117] "RemoveContainer" containerID="dbfade14a6f4fbebaf8b88de10911a85033aac1b629beb5d84ab6c0d414960de" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.518554 4833 scope.go:117] "RemoveContainer" containerID="11c38a94b2f30e3b7b153408175b6c1195eaaac414ff0edcd9bbd51a0cd72e89" Feb 19 13:05:58 crc kubenswrapper[4833]: E0219 13:05:58.519085 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11c38a94b2f30e3b7b153408175b6c1195eaaac414ff0edcd9bbd51a0cd72e89\": container with ID starting with 11c38a94b2f30e3b7b153408175b6c1195eaaac414ff0edcd9bbd51a0cd72e89 not found: ID does not exist" containerID="11c38a94b2f30e3b7b153408175b6c1195eaaac414ff0edcd9bbd51a0cd72e89" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.519131 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11c38a94b2f30e3b7b153408175b6c1195eaaac414ff0edcd9bbd51a0cd72e89"} err="failed to get container status \"11c38a94b2f30e3b7b153408175b6c1195eaaac414ff0edcd9bbd51a0cd72e89\": rpc error: code = NotFound desc = could not find container \"11c38a94b2f30e3b7b153408175b6c1195eaaac414ff0edcd9bbd51a0cd72e89\": container with ID starting with 11c38a94b2f30e3b7b153408175b6c1195eaaac414ff0edcd9bbd51a0cd72e89 not found: ID does not exist" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.519158 4833 scope.go:117] "RemoveContainer" containerID="dbfade14a6f4fbebaf8b88de10911a85033aac1b629beb5d84ab6c0d414960de" Feb 19 13:05:58 crc kubenswrapper[4833]: E0219 13:05:58.520151 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbfade14a6f4fbebaf8b88de10911a85033aac1b629beb5d84ab6c0d414960de\": container with ID starting with dbfade14a6f4fbebaf8b88de10911a85033aac1b629beb5d84ab6c0d414960de not found: ID does not exist" containerID="dbfade14a6f4fbebaf8b88de10911a85033aac1b629beb5d84ab6c0d414960de" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.520178 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbfade14a6f4fbebaf8b88de10911a85033aac1b629beb5d84ab6c0d414960de"} err="failed to get container status \"dbfade14a6f4fbebaf8b88de10911a85033aac1b629beb5d84ab6c0d414960de\": rpc error: code = NotFound desc = could not find container \"dbfade14a6f4fbebaf8b88de10911a85033aac1b629beb5d84ab6c0d414960de\": container with ID starting with dbfade14a6f4fbebaf8b88de10911a85033aac1b629beb5d84ab6c0d414960de not found: ID does not exist" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.551515 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4616286-268c-4a6e-9542-699a0da55dbd-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.820513 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.841426 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.851519 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 13:05:58 crc kubenswrapper[4833]: E0219 13:05:58.851921 4833 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c4616286-268c-4a6e-9542-699a0da55dbd" containerName="cinder-scheduler" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.851938 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4616286-268c-4a6e-9542-699a0da55dbd" containerName="cinder-scheduler" Feb 19 13:05:58 crc kubenswrapper[4833]: E0219 13:05:58.851959 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0935255f-9e1c-4009-b4d0-e7f4eadcd6ae" containerName="neutron-api" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.851967 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0935255f-9e1c-4009-b4d0-e7f4eadcd6ae" containerName="neutron-api" Feb 19 13:05:58 crc kubenswrapper[4833]: E0219 13:05:58.851980 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99a828fb-7fd8-432c-890f-2cebf8e2afad" containerName="dnsmasq-dns" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.851990 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="99a828fb-7fd8-432c-890f-2cebf8e2afad" containerName="dnsmasq-dns" Feb 19 13:05:58 crc kubenswrapper[4833]: E0219 13:05:58.852009 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6ff486b-931e-4973-9bac-5d68a07e9991" containerName="horizon" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.852015 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6ff486b-931e-4973-9bac-5d68a07e9991" containerName="horizon" Feb 19 13:05:58 crc kubenswrapper[4833]: E0219 13:05:58.852026 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6ff486b-931e-4973-9bac-5d68a07e9991" containerName="horizon-log" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.852032 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6ff486b-931e-4973-9bac-5d68a07e9991" containerName="horizon-log" Feb 19 13:05:58 crc kubenswrapper[4833]: E0219 13:05:58.852043 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99a828fb-7fd8-432c-890f-2cebf8e2afad" containerName="init" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.852050 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="99a828fb-7fd8-432c-890f-2cebf8e2afad" containerName="init" Feb 19 13:05:58 crc kubenswrapper[4833]: E0219 13:05:58.852062 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0935255f-9e1c-4009-b4d0-e7f4eadcd6ae" containerName="neutron-httpd" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.852070 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0935255f-9e1c-4009-b4d0-e7f4eadcd6ae" containerName="neutron-httpd" Feb 19 13:05:58 crc kubenswrapper[4833]: E0219 13:05:58.852080 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4616286-268c-4a6e-9542-699a0da55dbd" containerName="probe" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.852087 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4616286-268c-4a6e-9542-699a0da55dbd" containerName="probe" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.852265 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4616286-268c-4a6e-9542-699a0da55dbd" containerName="cinder-scheduler" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.852280 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4616286-268c-4a6e-9542-699a0da55dbd" containerName="probe" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.852289 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="0935255f-9e1c-4009-b4d0-e7f4eadcd6ae" 
containerName="neutron-httpd" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.852305 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="99a828fb-7fd8-432c-890f-2cebf8e2afad" containerName="dnsmasq-dns" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.852316 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="0935255f-9e1c-4009-b4d0-e7f4eadcd6ae" containerName="neutron-api" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.852329 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6ff486b-931e-4973-9bac-5d68a07e9991" containerName="horizon" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.852341 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6ff486b-931e-4973-9bac-5d68a07e9991" containerName="horizon-log" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.853428 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.867227 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.876613 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.978388 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77866722-bd38-4757-b8a0-d2939b40d2ee-scripts\") pod \"cinder-scheduler-0\" (UID: \"77866722-bd38-4757-b8a0-d2939b40d2ee\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.978661 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77866722-bd38-4757-b8a0-d2939b40d2ee-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"77866722-bd38-4757-b8a0-d2939b40d2ee\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.978853 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77866722-bd38-4757-b8a0-d2939b40d2ee-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"77866722-bd38-4757-b8a0-d2939b40d2ee\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.978959 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtbpw\" (UniqueName: \"kubernetes.io/projected/77866722-bd38-4757-b8a0-d2939b40d2ee-kube-api-access-xtbpw\") pod \"cinder-scheduler-0\" (UID: \"77866722-bd38-4757-b8a0-d2939b40d2ee\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.979069 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77866722-bd38-4757-b8a0-d2939b40d2ee-config-data\") pod \"cinder-scheduler-0\" (UID: \"77866722-bd38-4757-b8a0-d2939b40d2ee\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:58 crc kubenswrapper[4833]: I0219 13:05:58.979184 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77866722-bd38-4757-b8a0-d2939b40d2ee-etc-machine-id\") pod 
\"cinder-scheduler-0\" (UID: \"77866722-bd38-4757-b8a0-d2939b40d2ee\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:59 crc kubenswrapper[4833]: I0219 13:05:59.081188 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtbpw\" (UniqueName: \"kubernetes.io/projected/77866722-bd38-4757-b8a0-d2939b40d2ee-kube-api-access-xtbpw\") pod \"cinder-scheduler-0\" (UID: \"77866722-bd38-4757-b8a0-d2939b40d2ee\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:59 crc kubenswrapper[4833]: I0219 13:05:59.081244 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77866722-bd38-4757-b8a0-d2939b40d2ee-config-data\") pod \"cinder-scheduler-0\" (UID: \"77866722-bd38-4757-b8a0-d2939b40d2ee\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:59 crc kubenswrapper[4833]: I0219 13:05:59.081282 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77866722-bd38-4757-b8a0-d2939b40d2ee-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"77866722-bd38-4757-b8a0-d2939b40d2ee\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:59 crc kubenswrapper[4833]: I0219 13:05:59.081315 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77866722-bd38-4757-b8a0-d2939b40d2ee-scripts\") pod \"cinder-scheduler-0\" (UID: \"77866722-bd38-4757-b8a0-d2939b40d2ee\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:59 crc kubenswrapper[4833]: I0219 13:05:59.081372 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77866722-bd38-4757-b8a0-d2939b40d2ee-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"77866722-bd38-4757-b8a0-d2939b40d2ee\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:59 crc kubenswrapper[4833]: I0219 13:05:59.081415 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77866722-bd38-4757-b8a0-d2939b40d2ee-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"77866722-bd38-4757-b8a0-d2939b40d2ee\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:59 crc kubenswrapper[4833]: I0219 13:05:59.081766 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77866722-bd38-4757-b8a0-d2939b40d2ee-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"77866722-bd38-4757-b8a0-d2939b40d2ee\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:59 crc kubenswrapper[4833]: I0219 13:05:59.086442 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77866722-bd38-4757-b8a0-d2939b40d2ee-scripts\") pod \"cinder-scheduler-0\" (UID: \"77866722-bd38-4757-b8a0-d2939b40d2ee\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:59 crc kubenswrapper[4833]: I0219 13:05:59.086540 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77866722-bd38-4757-b8a0-d2939b40d2ee-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"77866722-bd38-4757-b8a0-d2939b40d2ee\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:59 crc kubenswrapper[4833]: I0219 13:05:59.087054 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/77866722-bd38-4757-b8a0-d2939b40d2ee-config-data\") pod \"cinder-scheduler-0\" (UID: \"77866722-bd38-4757-b8a0-d2939b40d2ee\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:59 crc kubenswrapper[4833]: I0219 13:05:59.089074 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77866722-bd38-4757-b8a0-d2939b40d2ee-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"77866722-bd38-4757-b8a0-d2939b40d2ee\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:59 crc kubenswrapper[4833]: I0219 13:05:59.105994 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtbpw\" (UniqueName: \"kubernetes.io/projected/77866722-bd38-4757-b8a0-d2939b40d2ee-kube-api-access-xtbpw\") pod \"cinder-scheduler-0\" (UID: \"77866722-bd38-4757-b8a0-d2939b40d2ee\") " pod="openstack/cinder-scheduler-0" Feb 19 13:05:59 crc kubenswrapper[4833]: I0219 13:05:59.210146 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 19 13:05:59 crc kubenswrapper[4833]: I0219 13:05:59.520908 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-54c7bb578f-26gwx" Feb 19 13:05:59 crc kubenswrapper[4833]: I0219 13:05:59.778544 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 19 13:06:00 crc kubenswrapper[4833]: I0219 13:06:00.335033 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4616286-268c-4a6e-9542-699a0da55dbd" path="/var/lib/kubelet/pods/c4616286-268c-4a6e-9542-699a0da55dbd/volumes" Feb 19 13:06:00 crc kubenswrapper[4833]: I0219 13:06:00.507233 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"77866722-bd38-4757-b8a0-d2939b40d2ee","Type":"ContainerStarted","Data":"bf328b6e48ef3ff87cb93b86102b92e34176f11c10b6d625318dce34db0b2883"} Feb 19 13:06:00 crc kubenswrapper[4833]: I0219 13:06:00.507560 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"77866722-bd38-4757-b8a0-d2939b40d2ee","Type":"ContainerStarted","Data":"d2103357b7378748f2d83bf64bf1459b6a0f65f086a32ae9da41a68ef26265e9"} Feb 19 13:06:00 crc kubenswrapper[4833]: I0219 13:06:00.818411 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 19 13:06:00 crc kubenswrapper[4833]: I0219 13:06:00.971388 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8468b886b8-mz8xd" Feb 19 13:06:01 crc kubenswrapper[4833]: I0219 13:06:01.245641 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-b8757d6bd-6749q" Feb 19 13:06:01 crc kubenswrapper[4833]: I0219 13:06:01.265926 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-64f9d5d984-h9kbm" Feb 19 13:06:01 crc kubenswrapper[4833]: I0219 13:06:01.270795 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-64f9d5d984-h9kbm" Feb 19 13:06:01 crc kubenswrapper[4833]: I0219 13:06:01.317845 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-54c7bb578f-26gwx" Feb 19 13:06:01 crc kubenswrapper[4833]: I0219 13:06:01.363644 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-b8757d6bd-6749q"] Feb 19 13:06:01 crc 
kubenswrapper[4833]: I0219 13:06:01.518886 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"77866722-bd38-4757-b8a0-d2939b40d2ee","Type":"ContainerStarted","Data":"1ec0e862b54ac263fe240eb6c24a898ba2a8eb9f6cc03b093e4aa2919bb9821e"} Feb 19 13:06:01 crc kubenswrapper[4833]: I0219 13:06:01.518945 4833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 19 13:06:01 crc kubenswrapper[4833]: I0219 13:06:01.519032 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-b8757d6bd-6749q" podUID="315865dd-deeb-4ad9-8cce-15b7df356b6c" containerName="placement-log" containerID="cri-o://e7621c18f354105869f8b96b2124f21516919184cd1d3c15215fa7c4984c85fe" gracePeriod=30 Feb 19 13:06:01 crc kubenswrapper[4833]: I0219 13:06:01.519213 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-b8757d6bd-6749q" podUID="315865dd-deeb-4ad9-8cce-15b7df356b6c" containerName="placement-api" containerID="cri-o://d01869bb2a8ef1704e5da7b6e3da6a215e0b7263f86c920ad1ac737c8a3e8137" gracePeriod=30 Feb 19 13:06:01 crc kubenswrapper[4833]: I0219 13:06:01.523485 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-b8757d6bd-6749q" podUID="315865dd-deeb-4ad9-8cce-15b7df356b6c" containerName="placement-api" probeResult="failure" output="Get \"https://10.217.0.159:8778/\": EOF" Feb 19 13:06:01 crc kubenswrapper[4833]: I0219 13:06:01.546907 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.546891214 podStartE2EDuration="3.546891214s" podCreationTimestamp="2026-02-19 13:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:06:01.540010489 +0000 UTC m=+1171.935529257" watchObservedRunningTime="2026-02-19 13:06:01.546891214 +0000 UTC m=+1171.942409982" Feb 19 13:06:01 crc kubenswrapper[4833]: I0219 13:06:01.683657 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-8468b886b8-mz8xd" Feb 19 13:06:01 crc kubenswrapper[4833]: I0219 13:06:01.745552 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-64db8648b4-fbc89"] Feb 19 13:06:01 crc kubenswrapper[4833]: I0219 13:06:01.745853 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-64db8648b4-fbc89" podUID="ca2586c3-a9f8-45ed-a4e6-406a64304f7d" containerName="barbican-api-log" containerID="cri-o://50df0bc3d921acdbe90a5a5abb42a203ac9399d4c01c80a34c5fdeafd6fd509e" gracePeriod=30 Feb 19 13:06:01 crc kubenswrapper[4833]: I0219 13:06:01.746450 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-64db8648b4-fbc89" podUID="ca2586c3-a9f8-45ed-a4e6-406a64304f7d" containerName="barbican-api" containerID="cri-o://1ec1078e71a1570ecd9602f633ec697007dc7275cefb571bd462bcf0c18aa423" gracePeriod=30 Feb 19 13:06:02 crc kubenswrapper[4833]: I0219 13:06:02.526410 4833 generic.go:334] "Generic (PLEG): container finished" podID="ca2586c3-a9f8-45ed-a4e6-406a64304f7d" containerID="50df0bc3d921acdbe90a5a5abb42a203ac9399d4c01c80a34c5fdeafd6fd509e" exitCode=143 Feb 19 13:06:02 crc kubenswrapper[4833]: I0219 13:06:02.526485 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64db8648b4-fbc89" 
event={"ID":"ca2586c3-a9f8-45ed-a4e6-406a64304f7d","Type":"ContainerDied","Data":"50df0bc3d921acdbe90a5a5abb42a203ac9399d4c01c80a34c5fdeafd6fd509e"} Feb 19 13:06:02 crc kubenswrapper[4833]: I0219 13:06:02.528845 4833 generic.go:334] "Generic (PLEG): container finished" podID="315865dd-deeb-4ad9-8cce-15b7df356b6c" containerID="e7621c18f354105869f8b96b2124f21516919184cd1d3c15215fa7c4984c85fe" exitCode=143 Feb 19 13:06:02 crc kubenswrapper[4833]: I0219 13:06:02.528873 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b8757d6bd-6749q" event={"ID":"315865dd-deeb-4ad9-8cce-15b7df356b6c","Type":"ContainerDied","Data":"e7621c18f354105869f8b96b2124f21516919184cd1d3c15215fa7c4984c85fe"} Feb 19 13:06:04 crc kubenswrapper[4833]: I0219 13:06:04.210671 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 19 13:06:04 crc kubenswrapper[4833]: I0219 13:06:04.912390 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-64db8648b4-fbc89" podUID="ca2586c3-a9f8-45ed-a4e6-406a64304f7d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": read tcp 10.217.0.2:59836->10.217.0.164:9311: read: connection reset by peer" Feb 19 13:06:04 crc kubenswrapper[4833]: I0219 13:06:04.912412 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-64db8648b4-fbc89" podUID="ca2586c3-a9f8-45ed-a4e6-406a64304f7d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": read tcp 10.217.0.2:59842->10.217.0.164:9311: read: connection reset by peer" Feb 19 13:06:04 crc kubenswrapper[4833]: I0219 13:06:04.933256 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-b8757d6bd-6749q" podUID="315865dd-deeb-4ad9-8cce-15b7df356b6c" containerName="placement-api" probeResult="failure" output="Get \"https://10.217.0.159:8778/\": read tcp 10.217.0.2:53038->10.217.0.159:8778: read: connection reset by peer" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.318127 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.319552 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.322279 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-zsgl5" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.322559 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.322675 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.350049 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.425626 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/15217ca3-b014-4f16-8a89-8e3c63565360-openstack-config-secret\") pod \"openstackclient\" (UID: \"15217ca3-b014-4f16-8a89-8e3c63565360\") " pod="openstack/openstackclient" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.425757 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15217ca3-b014-4f16-8a89-8e3c63565360-combined-ca-bundle\") pod \"openstackclient\" (UID: \"15217ca3-b014-4f16-8a89-8e3c63565360\") " pod="openstack/openstackclient" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.425820 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/15217ca3-b014-4f16-8a89-8e3c63565360-openstack-config\") pod \"openstackclient\" (UID: \"15217ca3-b014-4f16-8a89-8e3c63565360\") " pod="openstack/openstackclient" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.425838 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crvxl\" (UniqueName: \"kubernetes.io/projected/15217ca3-b014-4f16-8a89-8e3c63565360-kube-api-access-crvxl\") pod \"openstackclient\" (UID: \"15217ca3-b014-4f16-8a89-8e3c63565360\") " pod="openstack/openstackclient" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.527974 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15217ca3-b014-4f16-8a89-8e3c63565360-combined-ca-bundle\") pod \"openstackclient\" (UID: \"15217ca3-b014-4f16-8a89-8e3c63565360\") " pod="openstack/openstackclient" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.528073 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/15217ca3-b014-4f16-8a89-8e3c63565360-openstack-config\") pod \"openstackclient\" (UID: \"15217ca3-b014-4f16-8a89-8e3c63565360\") " pod="openstack/openstackclient" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.528097 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crvxl\" (UniqueName: \"kubernetes.io/projected/15217ca3-b014-4f16-8a89-8e3c63565360-kube-api-access-crvxl\") pod \"openstackclient\" (UID: \"15217ca3-b014-4f16-8a89-8e3c63565360\") " pod="openstack/openstackclient" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.528149 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/15217ca3-b014-4f16-8a89-8e3c63565360-openstack-config-secret\") pod \"openstackclient\" (UID: \"15217ca3-b014-4f16-8a89-8e3c63565360\") " pod="openstack/openstackclient" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.529425 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/15217ca3-b014-4f16-8a89-8e3c63565360-openstack-config\") pod \"openstackclient\" (UID: \"15217ca3-b014-4f16-8a89-8e3c63565360\") " pod="openstack/openstackclient" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.533422 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/15217ca3-b014-4f16-8a89-8e3c63565360-openstack-config-secret\") pod \"openstackclient\" (UID: \"15217ca3-b014-4f16-8a89-8e3c63565360\") " pod="openstack/openstackclient" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.540170 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15217ca3-b014-4f16-8a89-8e3c63565360-combined-ca-bundle\") pod \"openstackclient\" (UID: \"15217ca3-b014-4f16-8a89-8e3c63565360\") " pod="openstack/openstackclient" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.544842 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crvxl\" (UniqueName: \"kubernetes.io/projected/15217ca3-b014-4f16-8a89-8e3c63565360-kube-api-access-crvxl\") pod \"openstackclient\" (UID: \"15217ca3-b014-4f16-8a89-8e3c63565360\") " pod="openstack/openstackclient" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.570428 4833 generic.go:334] "Generic (PLEG): container finished" podID="315865dd-deeb-4ad9-8cce-15b7df356b6c" containerID="d01869bb2a8ef1704e5da7b6e3da6a215e0b7263f86c920ad1ac737c8a3e8137" exitCode=0 Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.570535 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b8757d6bd-6749q" event={"ID":"315865dd-deeb-4ad9-8cce-15b7df356b6c","Type":"ContainerDied","Data":"d01869bb2a8ef1704e5da7b6e3da6a215e0b7263f86c920ad1ac737c8a3e8137"} Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.570755 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-b8757d6bd-6749q" event={"ID":"315865dd-deeb-4ad9-8cce-15b7df356b6c","Type":"ContainerDied","Data":"e38383ccb04c1ca9d24dfdb1f59d23dc001b01f71c23f809ac4fa9dc6d6fc550"} Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.570772 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e38383ccb04c1ca9d24dfdb1f59d23dc001b01f71c23f809ac4fa9dc6d6fc550" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.572849 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-64db8648b4-fbc89" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.572975 4833 generic.go:334] "Generic (PLEG): container finished" podID="ca2586c3-a9f8-45ed-a4e6-406a64304f7d" containerID="1ec1078e71a1570ecd9602f633ec697007dc7275cefb571bd462bcf0c18aa423" exitCode=0 Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.573005 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64db8648b4-fbc89" event={"ID":"ca2586c3-a9f8-45ed-a4e6-406a64304f7d","Type":"ContainerDied","Data":"1ec1078e71a1570ecd9602f633ec697007dc7275cefb571bd462bcf0c18aa423"} Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.573143 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64db8648b4-fbc89" event={"ID":"ca2586c3-a9f8-45ed-a4e6-406a64304f7d","Type":"ContainerDied","Data":"dea60f8d6a0dca4e1b62704d4d6feea53d6d0d8b7a8bce92aee1aad543d6e53e"} Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.573171 4833 scope.go:117] "RemoveContainer" containerID="1ec1078e71a1570ecd9602f633ec697007dc7275cefb571bd462bcf0c18aa423" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.579441 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b8757d6bd-6749q" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.642384 4833 scope.go:117] "RemoveContainer" containerID="50df0bc3d921acdbe90a5a5abb42a203ac9399d4c01c80a34c5fdeafd6fd509e" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.644165 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.644790 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.652112 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.704046 4833 scope.go:117] "RemoveContainer" containerID="1ec1078e71a1570ecd9602f633ec697007dc7275cefb571bd462bcf0c18aa423" Feb 19 13:06:05 crc kubenswrapper[4833]: E0219 13:06:05.704528 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ec1078e71a1570ecd9602f633ec697007dc7275cefb571bd462bcf0c18aa423\": container with ID starting with 1ec1078e71a1570ecd9602f633ec697007dc7275cefb571bd462bcf0c18aa423 not found: ID does not exist" containerID="1ec1078e71a1570ecd9602f633ec697007dc7275cefb571bd462bcf0c18aa423" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.704561 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ec1078e71a1570ecd9602f633ec697007dc7275cefb571bd462bcf0c18aa423"} err="failed to get container status \"1ec1078e71a1570ecd9602f633ec697007dc7275cefb571bd462bcf0c18aa423\": rpc error: code = NotFound desc = could not find container \"1ec1078e71a1570ecd9602f633ec697007dc7275cefb571bd462bcf0c18aa423\": container with ID starting with 1ec1078e71a1570ecd9602f633ec697007dc7275cefb571bd462bcf0c18aa423 not found: ID does not exist" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.704581 4833 scope.go:117] "RemoveContainer" containerID="50df0bc3d921acdbe90a5a5abb42a203ac9399d4c01c80a34c5fdeafd6fd509e" Feb 19 13:06:05 crc kubenswrapper[4833]: E0219 13:06:05.705108 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"50df0bc3d921acdbe90a5a5abb42a203ac9399d4c01c80a34c5fdeafd6fd509e\": container with ID starting with 50df0bc3d921acdbe90a5a5abb42a203ac9399d4c01c80a34c5fdeafd6fd509e not found: ID does not exist" containerID="50df0bc3d921acdbe90a5a5abb42a203ac9399d4c01c80a34c5fdeafd6fd509e" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.705168 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50df0bc3d921acdbe90a5a5abb42a203ac9399d4c01c80a34c5fdeafd6fd509e"} err="failed to get container status \"50df0bc3d921acdbe90a5a5abb42a203ac9399d4c01c80a34c5fdeafd6fd509e\": rpc error: code = NotFound desc = could not find container \"50df0bc3d921acdbe90a5a5abb42a203ac9399d4c01c80a34c5fdeafd6fd509e\": container with ID starting with 50df0bc3d921acdbe90a5a5abb42a203ac9399d4c01c80a34c5fdeafd6fd509e not found: ID does not exist" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.712083 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 19 13:06:05 crc kubenswrapper[4833]: E0219 13:06:05.712527 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="315865dd-deeb-4ad9-8cce-15b7df356b6c" containerName="placement-api" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.712548 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="315865dd-deeb-4ad9-8cce-15b7df356b6c" containerName="placement-api" Feb 19 13:06:05 crc kubenswrapper[4833]: E0219 13:06:05.712563 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca2586c3-a9f8-45ed-a4e6-406a64304f7d" containerName="barbican-api-log" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.712569 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca2586c3-a9f8-45ed-a4e6-406a64304f7d" containerName="barbican-api-log" Feb 19 13:06:05 crc kubenswrapper[4833]: E0219 13:06:05.712630 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="315865dd-deeb-4ad9-8cce-15b7df356b6c" containerName="placement-log" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.712637 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="315865dd-deeb-4ad9-8cce-15b7df356b6c" containerName="placement-log" Feb 19 13:06:05 crc kubenswrapper[4833]: E0219 13:06:05.712650 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca2586c3-a9f8-45ed-a4e6-406a64304f7d" containerName="barbican-api" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.712655 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca2586c3-a9f8-45ed-a4e6-406a64304f7d" containerName="barbican-api" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.712807 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca2586c3-a9f8-45ed-a4e6-406a64304f7d" containerName="barbican-api" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.712840 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="315865dd-deeb-4ad9-8cce-15b7df356b6c" containerName="placement-api" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.712860 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="315865dd-deeb-4ad9-8cce-15b7df356b6c" containerName="placement-log" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.712871 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca2586c3-a9f8-45ed-a4e6-406a64304f7d" containerName="barbican-api-log" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.714025 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.729108 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.732369 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/315865dd-deeb-4ad9-8cce-15b7df356b6c-logs\") pod \"315865dd-deeb-4ad9-8cce-15b7df356b6c\" (UID: \"315865dd-deeb-4ad9-8cce-15b7df356b6c\") " Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.732416 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/315865dd-deeb-4ad9-8cce-15b7df356b6c-combined-ca-bundle\") pod \"315865dd-deeb-4ad9-8cce-15b7df356b6c\" (UID: \"315865dd-deeb-4ad9-8cce-15b7df356b6c\") " Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.732461 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/315865dd-deeb-4ad9-8cce-15b7df356b6c-internal-tls-certs\") pod \"315865dd-deeb-4ad9-8cce-15b7df356b6c\" (UID: \"315865dd-deeb-4ad9-8cce-15b7df356b6c\") " Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.732505 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th7tw\" (UniqueName: \"kubernetes.io/projected/315865dd-deeb-4ad9-8cce-15b7df356b6c-kube-api-access-th7tw\") pod \"315865dd-deeb-4ad9-8cce-15b7df356b6c\" (UID: \"315865dd-deeb-4ad9-8cce-15b7df356b6c\") " Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.732598 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca2586c3-a9f8-45ed-a4e6-406a64304f7d-combined-ca-bundle\") pod \"ca2586c3-a9f8-45ed-a4e6-406a64304f7d\" (UID: \"ca2586c3-a9f8-45ed-a4e6-406a64304f7d\") " Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.732620 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/315865dd-deeb-4ad9-8cce-15b7df356b6c-config-data\") pod \"315865dd-deeb-4ad9-8cce-15b7df356b6c\" (UID: \"315865dd-deeb-4ad9-8cce-15b7df356b6c\") " Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.732646 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca2586c3-a9f8-45ed-a4e6-406a64304f7d-logs\") pod \"ca2586c3-a9f8-45ed-a4e6-406a64304f7d\" (UID: \"ca2586c3-a9f8-45ed-a4e6-406a64304f7d\") " Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.732686 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/315865dd-deeb-4ad9-8cce-15b7df356b6c-public-tls-certs\") pod \"315865dd-deeb-4ad9-8cce-15b7df356b6c\" (UID: \"315865dd-deeb-4ad9-8cce-15b7df356b6c\") " Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.732711 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca2586c3-a9f8-45ed-a4e6-406a64304f7d-config-data-custom\") pod \"ca2586c3-a9f8-45ed-a4e6-406a64304f7d\" (UID: \"ca2586c3-a9f8-45ed-a4e6-406a64304f7d\") " Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.732770 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dbm4\" 
(UniqueName: \"kubernetes.io/projected/ca2586c3-a9f8-45ed-a4e6-406a64304f7d-kube-api-access-8dbm4\") pod \"ca2586c3-a9f8-45ed-a4e6-406a64304f7d\" (UID: \"ca2586c3-a9f8-45ed-a4e6-406a64304f7d\") " Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.732791 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca2586c3-a9f8-45ed-a4e6-406a64304f7d-config-data\") pod \"ca2586c3-a9f8-45ed-a4e6-406a64304f7d\" (UID: \"ca2586c3-a9f8-45ed-a4e6-406a64304f7d\") " Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.732843 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/315865dd-deeb-4ad9-8cce-15b7df356b6c-scripts\") pod \"315865dd-deeb-4ad9-8cce-15b7df356b6c\" (UID: \"315865dd-deeb-4ad9-8cce-15b7df356b6c\") " Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.734125 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca2586c3-a9f8-45ed-a4e6-406a64304f7d-logs" (OuterVolumeSpecName: "logs") pod "ca2586c3-a9f8-45ed-a4e6-406a64304f7d" (UID: "ca2586c3-a9f8-45ed-a4e6-406a64304f7d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.734828 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/315865dd-deeb-4ad9-8cce-15b7df356b6c-logs" (OuterVolumeSpecName: "logs") pod "315865dd-deeb-4ad9-8cce-15b7df356b6c" (UID: "315865dd-deeb-4ad9-8cce-15b7df356b6c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.737113 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/315865dd-deeb-4ad9-8cce-15b7df356b6c-scripts" (OuterVolumeSpecName: "scripts") pod "315865dd-deeb-4ad9-8cce-15b7df356b6c" (UID: "315865dd-deeb-4ad9-8cce-15b7df356b6c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.738528 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca2586c3-a9f8-45ed-a4e6-406a64304f7d-kube-api-access-8dbm4" (OuterVolumeSpecName: "kube-api-access-8dbm4") pod "ca2586c3-a9f8-45ed-a4e6-406a64304f7d" (UID: "ca2586c3-a9f8-45ed-a4e6-406a64304f7d"). InnerVolumeSpecName "kube-api-access-8dbm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.739628 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca2586c3-a9f8-45ed-a4e6-406a64304f7d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ca2586c3-a9f8-45ed-a4e6-406a64304f7d" (UID: "ca2586c3-a9f8-45ed-a4e6-406a64304f7d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.743861 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/315865dd-deeb-4ad9-8cce-15b7df356b6c-kube-api-access-th7tw" (OuterVolumeSpecName: "kube-api-access-th7tw") pod "315865dd-deeb-4ad9-8cce-15b7df356b6c" (UID: "315865dd-deeb-4ad9-8cce-15b7df356b6c"). InnerVolumeSpecName "kube-api-access-th7tw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.766482 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca2586c3-a9f8-45ed-a4e6-406a64304f7d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca2586c3-a9f8-45ed-a4e6-406a64304f7d" (UID: "ca2586c3-a9f8-45ed-a4e6-406a64304f7d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:05 crc kubenswrapper[4833]: E0219 13:06:05.778086 4833 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 19 13:06:05 crc kubenswrapper[4833]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_15217ca3-b014-4f16-8a89-8e3c63565360_0(3b704c35f17f2545a13f7ed218e329723cab7c031d694cbc1731278ed00404ab): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3b704c35f17f2545a13f7ed218e329723cab7c031d694cbc1731278ed00404ab" Netns:"/var/run/netns/40933d23-0d7b-44c1-933c-616a5c7644dc" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=3b704c35f17f2545a13f7ed218e329723cab7c031d694cbc1731278ed00404ab;K8S_POD_UID=15217ca3-b014-4f16-8a89-8e3c63565360" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/15217ca3-b014-4f16-8a89-8e3c63565360]: expected pod UID "15217ca3-b014-4f16-8a89-8e3c63565360" but got "df72920d-e022-48f9-b41c-f2fe6ed14da9" from Kube API Feb 19 13:06:05 crc kubenswrapper[4833]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 13:06:05 crc kubenswrapper[4833]: > Feb 19 13:06:05 crc kubenswrapper[4833]: E0219 13:06:05.778174 4833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 19 13:06:05 crc kubenswrapper[4833]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_15217ca3-b014-4f16-8a89-8e3c63565360_0(3b704c35f17f2545a13f7ed218e329723cab7c031d694cbc1731278ed00404ab): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3b704c35f17f2545a13f7ed218e329723cab7c031d694cbc1731278ed00404ab" Netns:"/var/run/netns/40933d23-0d7b-44c1-933c-616a5c7644dc" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=3b704c35f17f2545a13f7ed218e329723cab7c031d694cbc1731278ed00404ab;K8S_POD_UID=15217ca3-b014-4f16-8a89-8e3c63565360" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/15217ca3-b014-4f16-8a89-8e3c63565360]: expected pod UID "15217ca3-b014-4f16-8a89-8e3c63565360" but got "df72920d-e022-48f9-b41c-f2fe6ed14da9" from Kube API Feb 19 13:06:05 crc kubenswrapper[4833]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 19 13:06:05 crc kubenswrapper[4833]: > pod="openstack/openstackclient" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.794711 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/315865dd-deeb-4ad9-8cce-15b7df356b6c-config-data" (OuterVolumeSpecName: "config-data") pod "315865dd-deeb-4ad9-8cce-15b7df356b6c" (UID: "315865dd-deeb-4ad9-8cce-15b7df356b6c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.797620 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca2586c3-a9f8-45ed-a4e6-406a64304f7d-config-data" (OuterVolumeSpecName: "config-data") pod "ca2586c3-a9f8-45ed-a4e6-406a64304f7d" (UID: "ca2586c3-a9f8-45ed-a4e6-406a64304f7d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.799205 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/315865dd-deeb-4ad9-8cce-15b7df356b6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "315865dd-deeb-4ad9-8cce-15b7df356b6c" (UID: "315865dd-deeb-4ad9-8cce-15b7df356b6c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.836844 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/df72920d-e022-48f9-b41c-f2fe6ed14da9-openstack-config-secret\") pod \"openstackclient\" (UID: \"df72920d-e022-48f9-b41c-f2fe6ed14da9\") " pod="openstack/openstackclient" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.837031 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df72920d-e022-48f9-b41c-f2fe6ed14da9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"df72920d-e022-48f9-b41c-f2fe6ed14da9\") " pod="openstack/openstackclient" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.837171 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/df72920d-e022-48f9-b41c-f2fe6ed14da9-openstack-config\") pod \"openstackclient\" (UID: \"df72920d-e022-48f9-b41c-f2fe6ed14da9\") " pod="openstack/openstackclient" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.837233 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcvv2\" (UniqueName: \"kubernetes.io/projected/df72920d-e022-48f9-b41c-f2fe6ed14da9-kube-api-access-xcvv2\") pod \"openstackclient\" (UID: \"df72920d-e022-48f9-b41c-f2fe6ed14da9\") " pod="openstack/openstackclient" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.837389 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th7tw\" (UniqueName: 
\"kubernetes.io/projected/315865dd-deeb-4ad9-8cce-15b7df356b6c-kube-api-access-th7tw\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.837412 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca2586c3-a9f8-45ed-a4e6-406a64304f7d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.837422 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/315865dd-deeb-4ad9-8cce-15b7df356b6c-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.837432 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca2586c3-a9f8-45ed-a4e6-406a64304f7d-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.837441 4833 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ca2586c3-a9f8-45ed-a4e6-406a64304f7d-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.837450 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dbm4\" (UniqueName: \"kubernetes.io/projected/ca2586c3-a9f8-45ed-a4e6-406a64304f7d-kube-api-access-8dbm4\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.837460 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca2586c3-a9f8-45ed-a4e6-406a64304f7d-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.837468 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/315865dd-deeb-4ad9-8cce-15b7df356b6c-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.837476 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/315865dd-deeb-4ad9-8cce-15b7df356b6c-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.837484 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/315865dd-deeb-4ad9-8cce-15b7df356b6c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.854138 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/315865dd-deeb-4ad9-8cce-15b7df356b6c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "315865dd-deeb-4ad9-8cce-15b7df356b6c" (UID: "315865dd-deeb-4ad9-8cce-15b7df356b6c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.856433 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/315865dd-deeb-4ad9-8cce-15b7df356b6c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "315865dd-deeb-4ad9-8cce-15b7df356b6c" (UID: "315865dd-deeb-4ad9-8cce-15b7df356b6c"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.938774 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/df72920d-e022-48f9-b41c-f2fe6ed14da9-openstack-config\") pod \"openstackclient\" (UID: \"df72920d-e022-48f9-b41c-f2fe6ed14da9\") " pod="openstack/openstackclient" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.939046 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcvv2\" (UniqueName: \"kubernetes.io/projected/df72920d-e022-48f9-b41c-f2fe6ed14da9-kube-api-access-xcvv2\") pod \"openstackclient\" (UID: \"df72920d-e022-48f9-b41c-f2fe6ed14da9\") " pod="openstack/openstackclient" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.939235 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/df72920d-e022-48f9-b41c-f2fe6ed14da9-openstack-config-secret\") pod \"openstackclient\" (UID: \"df72920d-e022-48f9-b41c-f2fe6ed14da9\") " pod="openstack/openstackclient" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.939370 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df72920d-e022-48f9-b41c-f2fe6ed14da9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"df72920d-e022-48f9-b41c-f2fe6ed14da9\") " pod="openstack/openstackclient" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.939482 4833 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/315865dd-deeb-4ad9-8cce-15b7df356b6c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.939578 4833 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/315865dd-deeb-4ad9-8cce-15b7df356b6c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.939782 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/df72920d-e022-48f9-b41c-f2fe6ed14da9-openstack-config\") pod \"openstackclient\" (UID: \"df72920d-e022-48f9-b41c-f2fe6ed14da9\") " pod="openstack/openstackclient" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.942456 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/df72920d-e022-48f9-b41c-f2fe6ed14da9-openstack-config-secret\") pod \"openstackclient\" (UID: \"df72920d-e022-48f9-b41c-f2fe6ed14da9\") " pod="openstack/openstackclient" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.943546 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df72920d-e022-48f9-b41c-f2fe6ed14da9-combined-ca-bundle\") pod \"openstackclient\" (UID: \"df72920d-e022-48f9-b41c-f2fe6ed14da9\") " pod="openstack/openstackclient" Feb 19 13:06:05 crc kubenswrapper[4833]: I0219 13:06:05.954288 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcvv2\" (UniqueName: \"kubernetes.io/projected/df72920d-e022-48f9-b41c-f2fe6ed14da9-kube-api-access-xcvv2\") pod \"openstackclient\" (UID: \"df72920d-e022-48f9-b41c-f2fe6ed14da9\") " pod="openstack/openstackclient" Feb 19 
13:06:06 crc kubenswrapper[4833]: I0219 13:06:06.039694 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 19 13:06:06 crc kubenswrapper[4833]: I0219 13:06:06.547853 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 19 13:06:06 crc kubenswrapper[4833]: W0219 13:06:06.552638 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf72920d_e022_48f9_b41c_f2fe6ed14da9.slice/crio-4d7f76af817e0ca33027be599859c1bbfb0b6899669489efd03adcbe4c6972fe WatchSource:0}: Error finding container 4d7f76af817e0ca33027be599859c1bbfb0b6899669489efd03adcbe4c6972fe: Status 404 returned error can't find the container with id 4d7f76af817e0ca33027be599859c1bbfb0b6899669489efd03adcbe4c6972fe Feb 19 13:06:06 crc kubenswrapper[4833]: I0219 13:06:06.582040 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"df72920d-e022-48f9-b41c-f2fe6ed14da9","Type":"ContainerStarted","Data":"4d7f76af817e0ca33027be599859c1bbfb0b6899669489efd03adcbe4c6972fe"} Feb 19 13:06:06 crc kubenswrapper[4833]: I0219 13:06:06.583284 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 19 13:06:06 crc kubenswrapper[4833]: I0219 13:06:06.583315 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-64db8648b4-fbc89" Feb 19 13:06:06 crc kubenswrapper[4833]: I0219 13:06:06.583302 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-b8757d6bd-6749q" Feb 19 13:06:06 crc kubenswrapper[4833]: I0219 13:06:06.607905 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 13:06:06 crc kubenswrapper[4833]: I0219 13:06:06.628336 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-b8757d6bd-6749q"] Feb 19 13:06:06 crc kubenswrapper[4833]: I0219 13:06:06.634158 4833 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="15217ca3-b014-4f16-8a89-8e3c63565360" podUID="df72920d-e022-48f9-b41c-f2fe6ed14da9" Feb 19 13:06:06 crc kubenswrapper[4833]: I0219 13:06:06.639808 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-b8757d6bd-6749q"] Feb 19 13:06:06 crc kubenswrapper[4833]: I0219 13:06:06.646972 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-64db8648b4-fbc89"] Feb 19 13:06:06 crc kubenswrapper[4833]: I0219 13:06:06.654328 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-64db8648b4-fbc89"] Feb 19 13:06:06 crc kubenswrapper[4833]: I0219 13:06:06.751416 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crvxl\" (UniqueName: \"kubernetes.io/projected/15217ca3-b014-4f16-8a89-8e3c63565360-kube-api-access-crvxl\") pod \"15217ca3-b014-4f16-8a89-8e3c63565360\" (UID: \"15217ca3-b014-4f16-8a89-8e3c63565360\") " Feb 19 13:06:06 crc kubenswrapper[4833]: I0219 13:06:06.751573 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/15217ca3-b014-4f16-8a89-8e3c63565360-openstack-config-secret\") pod \"15217ca3-b014-4f16-8a89-8e3c63565360\" (UID: \"15217ca3-b014-4f16-8a89-8e3c63565360\") " Feb 19 13:06:06 crc kubenswrapper[4833]: I0219 13:06:06.751601 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/15217ca3-b014-4f16-8a89-8e3c63565360-openstack-config\") pod \"15217ca3-b014-4f16-8a89-8e3c63565360\" (UID: \"15217ca3-b014-4f16-8a89-8e3c63565360\") " Feb 19 13:06:06 crc kubenswrapper[4833]: I0219 13:06:06.751619 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15217ca3-b014-4f16-8a89-8e3c63565360-combined-ca-bundle\") pod \"15217ca3-b014-4f16-8a89-8e3c63565360\" (UID: \"15217ca3-b014-4f16-8a89-8e3c63565360\") " Feb 19 13:06:06 crc kubenswrapper[4833]: I0219 13:06:06.752285 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15217ca3-b014-4f16-8a89-8e3c63565360-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "15217ca3-b014-4f16-8a89-8e3c63565360" (UID: "15217ca3-b014-4f16-8a89-8e3c63565360"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:06:06 crc kubenswrapper[4833]: I0219 13:06:06.775643 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15217ca3-b014-4f16-8a89-8e3c63565360-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "15217ca3-b014-4f16-8a89-8e3c63565360" (UID: "15217ca3-b014-4f16-8a89-8e3c63565360"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:06 crc kubenswrapper[4833]: I0219 13:06:06.775683 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15217ca3-b014-4f16-8a89-8e3c63565360-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15217ca3-b014-4f16-8a89-8e3c63565360" (UID: "15217ca3-b014-4f16-8a89-8e3c63565360"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:06 crc kubenswrapper[4833]: I0219 13:06:06.775742 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15217ca3-b014-4f16-8a89-8e3c63565360-kube-api-access-crvxl" (OuterVolumeSpecName: "kube-api-access-crvxl") pod "15217ca3-b014-4f16-8a89-8e3c63565360" (UID: "15217ca3-b014-4f16-8a89-8e3c63565360"). InnerVolumeSpecName "kube-api-access-crvxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:06:06 crc kubenswrapper[4833]: I0219 13:06:06.853849 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crvxl\" (UniqueName: \"kubernetes.io/projected/15217ca3-b014-4f16-8a89-8e3c63565360-kube-api-access-crvxl\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:06 crc kubenswrapper[4833]: I0219 13:06:06.853894 4833 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/15217ca3-b014-4f16-8a89-8e3c63565360-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:06 crc kubenswrapper[4833]: I0219 13:06:06.853908 4833 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/15217ca3-b014-4f16-8a89-8e3c63565360-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:06 crc kubenswrapper[4833]: I0219 13:06:06.853919 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15217ca3-b014-4f16-8a89-8e3c63565360-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:07 crc kubenswrapper[4833]: I0219 13:06:07.590098 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 19 13:06:07 crc kubenswrapper[4833]: I0219 13:06:07.603129 4833 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="15217ca3-b014-4f16-8a89-8e3c63565360" podUID="df72920d-e022-48f9-b41c-f2fe6ed14da9" Feb 19 13:06:08 crc kubenswrapper[4833]: I0219 13:06:08.329484 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5bc667ffbb-qqgnx" podUID="bbfe4179-53a2-4a74-9045-7a498c9aad70" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Feb 19 13:06:08 crc kubenswrapper[4833]: I0219 13:06:08.331535 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15217ca3-b014-4f16-8a89-8e3c63565360" path="/var/lib/kubelet/pods/15217ca3-b014-4f16-8a89-8e3c63565360/volumes" Feb 19 13:06:08 crc kubenswrapper[4833]: I0219 13:06:08.332227 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="315865dd-deeb-4ad9-8cce-15b7df356b6c" path="/var/lib/kubelet/pods/315865dd-deeb-4ad9-8cce-15b7df356b6c/volumes" Feb 19 13:06:08 crc kubenswrapper[4833]: I0219 13:06:08.333322 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca2586c3-a9f8-45ed-a4e6-406a64304f7d" path="/var/lib/kubelet/pods/ca2586c3-a9f8-45ed-a4e6-406a64304f7d/volumes" Feb 19 13:06:09 crc kubenswrapper[4833]: I0219 13:06:09.096398 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-756fd4958c-8cv9q"] Feb 19 13:06:09 crc kubenswrapper[4833]: I0219 13:06:09.099594 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-756fd4958c-8cv9q" Feb 19 13:06:09 crc kubenswrapper[4833]: I0219 13:06:09.105830 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 19 13:06:09 crc kubenswrapper[4833]: I0219 13:06:09.106027 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 19 13:06:09 crc kubenswrapper[4833]: I0219 13:06:09.106128 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 19 13:06:09 crc kubenswrapper[4833]: I0219 13:06:09.129685 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-756fd4958c-8cv9q"] Feb 19 13:06:09 crc kubenswrapper[4833]: I0219 13:06:09.200702 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c148317b-fc12-4940-8fb0-587c8eff29f9-run-httpd\") pod \"swift-proxy-756fd4958c-8cv9q\" (UID: \"c148317b-fc12-4940-8fb0-587c8eff29f9\") " pod="openstack/swift-proxy-756fd4958c-8cv9q" Feb 19 13:06:09 crc kubenswrapper[4833]: I0219 13:06:09.201004 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4w8k\" (UniqueName: \"kubernetes.io/projected/c148317b-fc12-4940-8fb0-587c8eff29f9-kube-api-access-w4w8k\") pod \"swift-proxy-756fd4958c-8cv9q\" (UID: \"c148317b-fc12-4940-8fb0-587c8eff29f9\") " pod="openstack/swift-proxy-756fd4958c-8cv9q" Feb 19 13:06:09 crc kubenswrapper[4833]: I0219 13:06:09.201028 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c148317b-fc12-4940-8fb0-587c8eff29f9-combined-ca-bundle\") pod \"swift-proxy-756fd4958c-8cv9q\" (UID: \"c148317b-fc12-4940-8fb0-587c8eff29f9\") " pod="openstack/swift-proxy-756fd4958c-8cv9q" Feb 19 13:06:09 crc kubenswrapper[4833]: I0219 13:06:09.201086 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c148317b-fc12-4940-8fb0-587c8eff29f9-internal-tls-certs\") pod \"swift-proxy-756fd4958c-8cv9q\" (UID: \"c148317b-fc12-4940-8fb0-587c8eff29f9\") " pod="openstack/swift-proxy-756fd4958c-8cv9q" Feb 19 13:06:09 crc kubenswrapper[4833]: I0219 13:06:09.201106 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c148317b-fc12-4940-8fb0-587c8eff29f9-public-tls-certs\") pod \"swift-proxy-756fd4958c-8cv9q\" (UID: \"c148317b-fc12-4940-8fb0-587c8eff29f9\") " pod="openstack/swift-proxy-756fd4958c-8cv9q" Feb 19 13:06:09 crc kubenswrapper[4833]: I0219 13:06:09.201213 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c148317b-fc12-4940-8fb0-587c8eff29f9-etc-swift\") pod \"swift-proxy-756fd4958c-8cv9q\" (UID: \"c148317b-fc12-4940-8fb0-587c8eff29f9\") " pod="openstack/swift-proxy-756fd4958c-8cv9q" Feb 19 13:06:09 crc kubenswrapper[4833]: I0219 13:06:09.201262 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c148317b-fc12-4940-8fb0-587c8eff29f9-config-data\") pod \"swift-proxy-756fd4958c-8cv9q\" (UID: \"c148317b-fc12-4940-8fb0-587c8eff29f9\") " pod="openstack/swift-proxy-756fd4958c-8cv9q" Feb 19 13:06:09 crc kubenswrapper[4833]: I0219 13:06:09.201298 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c148317b-fc12-4940-8fb0-587c8eff29f9-log-httpd\") pod \"swift-proxy-756fd4958c-8cv9q\" (UID: \"c148317b-fc12-4940-8fb0-587c8eff29f9\") " pod="openstack/swift-proxy-756fd4958c-8cv9q" Feb 19 13:06:09 crc kubenswrapper[4833]: I0219 13:06:09.303349 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c148317b-fc12-4940-8fb0-587c8eff29f9-run-httpd\") pod \"swift-proxy-756fd4958c-8cv9q\" (UID: \"c148317b-fc12-4940-8fb0-587c8eff29f9\") " pod="openstack/swift-proxy-756fd4958c-8cv9q" Feb 19 13:06:09 crc kubenswrapper[4833]: I0219 13:06:09.303403 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4w8k\" (UniqueName: \"kubernetes.io/projected/c148317b-fc12-4940-8fb0-587c8eff29f9-kube-api-access-w4w8k\") pod \"swift-proxy-756fd4958c-8cv9q\" (UID: \"c148317b-fc12-4940-8fb0-587c8eff29f9\") " pod="openstack/swift-proxy-756fd4958c-8cv9q" Feb 19 13:06:09 crc kubenswrapper[4833]: I0219 13:06:09.303430 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c148317b-fc12-4940-8fb0-587c8eff29f9-combined-ca-bundle\") pod \"swift-proxy-756fd4958c-8cv9q\" (UID: \"c148317b-fc12-4940-8fb0-587c8eff29f9\") " pod="openstack/swift-proxy-756fd4958c-8cv9q" Feb 19 13:06:09 crc kubenswrapper[4833]: I0219 13:06:09.303486 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c148317b-fc12-4940-8fb0-587c8eff29f9-internal-tls-certs\") pod \"swift-proxy-756fd4958c-8cv9q\" (UID: \"c148317b-fc12-4940-8fb0-587c8eff29f9\") " pod="openstack/swift-proxy-756fd4958c-8cv9q" Feb 19 13:06:09 crc kubenswrapper[4833]: I0219 13:06:09.303538 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c148317b-fc12-4940-8fb0-587c8eff29f9-public-tls-certs\") pod \"swift-proxy-756fd4958c-8cv9q\" (UID: \"c148317b-fc12-4940-8fb0-587c8eff29f9\") " pod="openstack/swift-proxy-756fd4958c-8cv9q" Feb 19 13:06:09 crc kubenswrapper[4833]: I0219 13:06:09.303621 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c148317b-fc12-4940-8fb0-587c8eff29f9-etc-swift\") pod \"swift-proxy-756fd4958c-8cv9q\" (UID: \"c148317b-fc12-4940-8fb0-587c8eff29f9\") " pod="openstack/swift-proxy-756fd4958c-8cv9q" Feb 19 13:06:09 crc kubenswrapper[4833]: I0219 13:06:09.303662 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c148317b-fc12-4940-8fb0-587c8eff29f9-config-data\") pod \"swift-proxy-756fd4958c-8cv9q\" (UID: \"c148317b-fc12-4940-8fb0-587c8eff29f9\") " pod="openstack/swift-proxy-756fd4958c-8cv9q" Feb 19 13:06:09 crc kubenswrapper[4833]: I0219 13:06:09.303680 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c148317b-fc12-4940-8fb0-587c8eff29f9-log-httpd\") pod \"swift-proxy-756fd4958c-8cv9q\" (UID: \"c148317b-fc12-4940-8fb0-587c8eff29f9\") " pod="openstack/swift-proxy-756fd4958c-8cv9q" Feb 19 13:06:09 crc kubenswrapper[4833]: I0219 13:06:09.304216 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c148317b-fc12-4940-8fb0-587c8eff29f9-log-httpd\") pod \"swift-proxy-756fd4958c-8cv9q\" (UID: \"c148317b-fc12-4940-8fb0-587c8eff29f9\") " pod="openstack/swift-proxy-756fd4958c-8cv9q" Feb 19 13:06:09 crc kubenswrapper[4833]: I0219 13:06:09.304431 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c148317b-fc12-4940-8fb0-587c8eff29f9-run-httpd\") pod \"swift-proxy-756fd4958c-8cv9q\" (UID: \"c148317b-fc12-4940-8fb0-587c8eff29f9\") " pod="openstack/swift-proxy-756fd4958c-8cv9q" Feb 19 13:06:09 crc kubenswrapper[4833]: I0219 13:06:09.312308 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c148317b-fc12-4940-8fb0-587c8eff29f9-public-tls-certs\") pod \"swift-proxy-756fd4958c-8cv9q\" (UID: \"c148317b-fc12-4940-8fb0-587c8eff29f9\") " pod="openstack/swift-proxy-756fd4958c-8cv9q" Feb 19 13:06:09 crc kubenswrapper[4833]: I0219 13:06:09.312563 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c148317b-fc12-4940-8fb0-587c8eff29f9-config-data\") pod \"swift-proxy-756fd4958c-8cv9q\" (UID: \"c148317b-fc12-4940-8fb0-587c8eff29f9\") " pod="openstack/swift-proxy-756fd4958c-8cv9q" Feb 19 13:06:09 crc kubenswrapper[4833]: I0219 13:06:09.312608 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/c148317b-fc12-4940-8fb0-587c8eff29f9-etc-swift\") pod \"swift-proxy-756fd4958c-8cv9q\" 
(UID: \"c148317b-fc12-4940-8fb0-587c8eff29f9\") " pod="openstack/swift-proxy-756fd4958c-8cv9q" Feb 19 13:06:09 crc kubenswrapper[4833]: I0219 13:06:09.313623 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c148317b-fc12-4940-8fb0-587c8eff29f9-combined-ca-bundle\") pod \"swift-proxy-756fd4958c-8cv9q\" (UID: \"c148317b-fc12-4940-8fb0-587c8eff29f9\") " pod="openstack/swift-proxy-756fd4958c-8cv9q" Feb 19 13:06:09 crc kubenswrapper[4833]: I0219 13:06:09.325159 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c148317b-fc12-4940-8fb0-587c8eff29f9-internal-tls-certs\") pod \"swift-proxy-756fd4958c-8cv9q\" (UID: \"c148317b-fc12-4940-8fb0-587c8eff29f9\") " pod="openstack/swift-proxy-756fd4958c-8cv9q" Feb 19 13:06:09 crc kubenswrapper[4833]: I0219 13:06:09.325866 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4w8k\" (UniqueName: \"kubernetes.io/projected/c148317b-fc12-4940-8fb0-587c8eff29f9-kube-api-access-w4w8k\") pod \"swift-proxy-756fd4958c-8cv9q\" (UID: \"c148317b-fc12-4940-8fb0-587c8eff29f9\") " pod="openstack/swift-proxy-756fd4958c-8cv9q" Feb 19 13:06:09 crc kubenswrapper[4833]: I0219 13:06:09.424072 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-756fd4958c-8cv9q" Feb 19 13:06:09 crc kubenswrapper[4833]: I0219 13:06:09.509692 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 19 13:06:09 crc kubenswrapper[4833]: I0219 13:06:09.985810 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-756fd4958c-8cv9q"] Feb 19 13:06:09 crc kubenswrapper[4833]: W0219 13:06:09.997793 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc148317b_fc12_4940_8fb0_587c8eff29f9.slice/crio-1def749c6c8502b09a560865fbd3042e1b7815185214e0d58e8b370085e4ae6e WatchSource:0}: Error finding container 1def749c6c8502b09a560865fbd3042e1b7815185214e0d58e8b370085e4ae6e: Status 404 returned error can't find the container with id 1def749c6c8502b09a560865fbd3042e1b7815185214e0d58e8b370085e4ae6e Feb 19 13:06:10 crc kubenswrapper[4833]: I0219 13:06:10.622733 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-756fd4958c-8cv9q" event={"ID":"c148317b-fc12-4940-8fb0-587c8eff29f9","Type":"ContainerStarted","Data":"ea90c850e8ce1d3e596204a42449299297d4a761a11ef7197f594442fd5d493f"} Feb 19 13:06:10 crc kubenswrapper[4833]: I0219 13:06:10.623429 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-756fd4958c-8cv9q" Feb 19 13:06:10 crc kubenswrapper[4833]: I0219 13:06:10.623450 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-756fd4958c-8cv9q" event={"ID":"c148317b-fc12-4940-8fb0-587c8eff29f9","Type":"ContainerStarted","Data":"253b134bebc37c93510e3cdd03c2265447b538e449ee0915902bf5e165dc2fa5"} Feb 19 13:06:10 crc kubenswrapper[4833]: I0219 13:06:10.623467 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-756fd4958c-8cv9q" event={"ID":"c148317b-fc12-4940-8fb0-587c8eff29f9","Type":"ContainerStarted","Data":"1def749c6c8502b09a560865fbd3042e1b7815185214e0d58e8b370085e4ae6e"} Feb 19 13:06:10 crc kubenswrapper[4833]: I0219 13:06:10.648614 4833 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/swift-proxy-756fd4958c-8cv9q" podStartSLOduration=1.6485942439999999 podStartE2EDuration="1.648594244s" podCreationTimestamp="2026-02-19 13:06:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:06:10.644686599 +0000 UTC m=+1181.040205367" watchObservedRunningTime="2026-02-19 13:06:10.648594244 +0000 UTC m=+1181.044113012" Feb 19 13:06:11 crc kubenswrapper[4833]: I0219 13:06:11.297021 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:06:11 crc kubenswrapper[4833]: I0219 13:06:11.297392 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8" containerName="sg-core" containerID="cri-o://890084e668006b412cde3e4304a29578481436109e36e60e90cdb7a4e522ae15" gracePeriod=30 Feb 19 13:06:11 crc kubenswrapper[4833]: I0219 13:06:11.297405 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8" containerName="ceilometer-notification-agent" containerID="cri-o://d1cd809acdad1a7ca6ab899ee8581bdb370ab1466b0d8dfd3345f39155c51d86" gracePeriod=30 Feb 19 13:06:11 crc kubenswrapper[4833]: I0219 13:06:11.297435 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8" containerName="proxy-httpd" containerID="cri-o://1ff8330995418f20e6a71c0eaa2b565d10db4946e2cd980f82365bc38b5f9f6c" gracePeriod=30 Feb 19 13:06:11 crc kubenswrapper[4833]: I0219 13:06:11.297350 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8" containerName="ceilometer-central-agent" containerID="cri-o://6f784bd68afc71d989886878e0c43b47e29ee1433e0c84dd8daaca1d15bf022e" gracePeriod=30 Feb 19 13:06:11 crc kubenswrapper[4833]: I0219 13:06:11.312125 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.168:3000/\": EOF" Feb 19 13:06:11 crc kubenswrapper[4833]: I0219 13:06:11.633786 4833 generic.go:334] "Generic (PLEG): container finished" podID="c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8" containerID="1ff8330995418f20e6a71c0eaa2b565d10db4946e2cd980f82365bc38b5f9f6c" exitCode=0 Feb 19 13:06:11 crc kubenswrapper[4833]: I0219 13:06:11.633821 4833 generic.go:334] "Generic (PLEG): container finished" podID="c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8" containerID="890084e668006b412cde3e4304a29578481436109e36e60e90cdb7a4e522ae15" exitCode=2 Feb 19 13:06:11 crc kubenswrapper[4833]: I0219 13:06:11.633875 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8","Type":"ContainerDied","Data":"1ff8330995418f20e6a71c0eaa2b565d10db4946e2cd980f82365bc38b5f9f6c"} Feb 19 13:06:11 crc kubenswrapper[4833]: I0219 13:06:11.633941 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8","Type":"ContainerDied","Data":"890084e668006b412cde3e4304a29578481436109e36e60e90cdb7a4e522ae15"} Feb 19 13:06:11 crc kubenswrapper[4833]: I0219 13:06:11.634042 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/swift-proxy-756fd4958c-8cv9q" Feb 19 13:06:12 crc kubenswrapper[4833]: I0219 13:06:12.649042 4833 generic.go:334] "Generic (PLEG): container finished" podID="c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8" containerID="6f784bd68afc71d989886878e0c43b47e29ee1433e0c84dd8daaca1d15bf022e" exitCode=0 Feb 19 13:06:12 crc kubenswrapper[4833]: I0219 13:06:12.649130 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8","Type":"ContainerDied","Data":"6f784bd68afc71d989886878e0c43b47e29ee1433e0c84dd8daaca1d15bf022e"} Feb 19 13:06:14 crc kubenswrapper[4833]: I0219 13:06:14.526256 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.168:3000/\": dial tcp 10.217.0.168:3000: connect: connection refused" Feb 19 13:06:14 crc kubenswrapper[4833]: I0219 13:06:14.672531 4833 generic.go:334] "Generic (PLEG): container finished" podID="c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8" containerID="d1cd809acdad1a7ca6ab899ee8581bdb370ab1466b0d8dfd3345f39155c51d86" exitCode=0 Feb 19 13:06:14 crc kubenswrapper[4833]: I0219 13:06:14.672574 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8","Type":"ContainerDied","Data":"d1cd809acdad1a7ca6ab899ee8581bdb370ab1466b0d8dfd3345f39155c51d86"} Feb 19 13:06:15 crc kubenswrapper[4833]: I0219 13:06:15.744431 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:06:15 crc kubenswrapper[4833]: I0219 13:06:15.744498 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.414946 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.581221 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-scripts\") pod \"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8\" (UID: \"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8\") " Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.581290 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sr9z\" (UniqueName: \"kubernetes.io/projected/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-kube-api-access-7sr9z\") pod \"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8\" (UID: \"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8\") " Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.581333 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-config-data\") pod \"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8\" (UID: \"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8\") " Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.581413 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-log-httpd\") pod \"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8\" (UID: \"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8\") " Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.581465 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-combined-ca-bundle\") pod \"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8\" (UID: \"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8\") " Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.581499 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-sg-core-conf-yaml\") pod \"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8\" (UID: \"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8\") " Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.581574 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-run-httpd\") pod \"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8\" (UID: \"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8\") " Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.582092 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8" (UID: "c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.582135 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8" (UID: "c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.592047 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-scripts" (OuterVolumeSpecName: "scripts") pod "c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8" (UID: "c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.592086 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-kube-api-access-7sr9z" (OuterVolumeSpecName: "kube-api-access-7sr9z") pod "c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8" (UID: "c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8"). InnerVolumeSpecName "kube-api-access-7sr9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.614408 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8" (UID: "c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.673167 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-config-data" (OuterVolumeSpecName: "config-data") pod "c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8" (UID: "c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.676803 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8" (UID: "c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.684021 4833 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.684187 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.684269 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sr9z\" (UniqueName: \"kubernetes.io/projected/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-kube-api-access-7sr9z\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.684347 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.684417 4833 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.684487 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.684591 4833 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.699434 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8","Type":"ContainerDied","Data":"ec4178f52b38ea782ec1c67e0b3c7e768a9c579c4e9577dddbae08cb9e136105"} Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.699482 4833 scope.go:117] "RemoveContainer" containerID="1ff8330995418f20e6a71c0eaa2b565d10db4946e2cd980f82365bc38b5f9f6c" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.699748 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.703491 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"df72920d-e022-48f9-b41c-f2fe6ed14da9","Type":"ContainerStarted","Data":"ed95caf0b41b8f0efd442cd468167be719adac6a91ff7cc90026b3cf33c670cf"} Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.718762 4833 scope.go:117] "RemoveContainer" containerID="890084e668006b412cde3e4304a29578481436109e36e60e90cdb7a4e522ae15" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.741785 4833 scope.go:117] "RemoveContainer" containerID="d1cd809acdad1a7ca6ab899ee8581bdb370ab1466b0d8dfd3345f39155c51d86" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.742455 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.121438876 podStartE2EDuration="11.742442913s" podCreationTimestamp="2026-02-19 13:06:05 +0000 UTC" firstStartedPulling="2026-02-19 13:06:06.554228329 +0000 UTC m=+1176.949747107" lastFinishedPulling="2026-02-19 13:06:16.175232376 +0000 UTC m=+1186.570751144" observedRunningTime="2026-02-19 13:06:16.726295149 +0000 UTC m=+1187.121813917" watchObservedRunningTime="2026-02-19 13:06:16.742442913 +0000 UTC m=+1187.137961681" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.753924 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.772215 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.781919 4833 scope.go:117] "RemoveContainer" containerID="6f784bd68afc71d989886878e0c43b47e29ee1433e0c84dd8daaca1d15bf022e" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.792911 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:06:16 crc kubenswrapper[4833]: E0219 13:06:16.793247 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8" containerName="ceilometer-central-agent" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.793263 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8" containerName="ceilometer-central-agent" Feb 19 13:06:16 crc kubenswrapper[4833]: E0219 13:06:16.793288 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8" containerName="sg-core" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.793294 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8" containerName="sg-core" Feb 19 13:06:16 crc kubenswrapper[4833]: E0219 13:06:16.793311 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8" containerName="ceilometer-notification-agent" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.793317 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8" containerName="ceilometer-notification-agent" Feb 19 13:06:16 crc kubenswrapper[4833]: E0219 13:06:16.793333 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8" containerName="proxy-httpd" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.793339 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8" 
containerName="proxy-httpd" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.793490 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8" containerName="proxy-httpd" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.793528 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8" containerName="ceilometer-notification-agent" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.793539 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8" containerName="ceilometer-central-agent" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.793548 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8" containerName="sg-core" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.795063 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.797293 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.797439 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.798223 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.888458 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7213a0a5-92b6-4b30-93e7-21e24d4c7911-config-data\") pod \"ceilometer-0\" (UID: \"7213a0a5-92b6-4b30-93e7-21e24d4c7911\") " pod="openstack/ceilometer-0" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.888534 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48mkb\" (UniqueName: \"kubernetes.io/projected/7213a0a5-92b6-4b30-93e7-21e24d4c7911-kube-api-access-48mkb\") pod \"ceilometer-0\" (UID: \"7213a0a5-92b6-4b30-93e7-21e24d4c7911\") " pod="openstack/ceilometer-0" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.888564 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7213a0a5-92b6-4b30-93e7-21e24d4c7911-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7213a0a5-92b6-4b30-93e7-21e24d4c7911\") " pod="openstack/ceilometer-0" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.888618 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7213a0a5-92b6-4b30-93e7-21e24d4c7911-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7213a0a5-92b6-4b30-93e7-21e24d4c7911\") " pod="openstack/ceilometer-0" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.888660 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7213a0a5-92b6-4b30-93e7-21e24d4c7911-scripts\") pod \"ceilometer-0\" (UID: \"7213a0a5-92b6-4b30-93e7-21e24d4c7911\") " pod="openstack/ceilometer-0" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.888679 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/7213a0a5-92b6-4b30-93e7-21e24d4c7911-log-httpd\") pod \"ceilometer-0\" (UID: \"7213a0a5-92b6-4b30-93e7-21e24d4c7911\") " pod="openstack/ceilometer-0" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.888696 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7213a0a5-92b6-4b30-93e7-21e24d4c7911-run-httpd\") pod \"ceilometer-0\" (UID: \"7213a0a5-92b6-4b30-93e7-21e24d4c7911\") " pod="openstack/ceilometer-0" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.990631 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7213a0a5-92b6-4b30-93e7-21e24d4c7911-config-data\") pod \"ceilometer-0\" (UID: \"7213a0a5-92b6-4b30-93e7-21e24d4c7911\") " pod="openstack/ceilometer-0" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.990946 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48mkb\" (UniqueName: \"kubernetes.io/projected/7213a0a5-92b6-4b30-93e7-21e24d4c7911-kube-api-access-48mkb\") pod \"ceilometer-0\" (UID: \"7213a0a5-92b6-4b30-93e7-21e24d4c7911\") " pod="openstack/ceilometer-0" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.991048 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7213a0a5-92b6-4b30-93e7-21e24d4c7911-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7213a0a5-92b6-4b30-93e7-21e24d4c7911\") " pod="openstack/ceilometer-0" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.991190 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7213a0a5-92b6-4b30-93e7-21e24d4c7911-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7213a0a5-92b6-4b30-93e7-21e24d4c7911\") " pod="openstack/ceilometer-0" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.991335 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7213a0a5-92b6-4b30-93e7-21e24d4c7911-scripts\") pod \"ceilometer-0\" (UID: \"7213a0a5-92b6-4b30-93e7-21e24d4c7911\") " pod="openstack/ceilometer-0" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.991432 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7213a0a5-92b6-4b30-93e7-21e24d4c7911-log-httpd\") pod \"ceilometer-0\" (UID: \"7213a0a5-92b6-4b30-93e7-21e24d4c7911\") " pod="openstack/ceilometer-0" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.991539 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7213a0a5-92b6-4b30-93e7-21e24d4c7911-run-httpd\") pod \"ceilometer-0\" (UID: \"7213a0a5-92b6-4b30-93e7-21e24d4c7911\") " pod="openstack/ceilometer-0" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.992041 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7213a0a5-92b6-4b30-93e7-21e24d4c7911-log-httpd\") pod \"ceilometer-0\" (UID: \"7213a0a5-92b6-4b30-93e7-21e24d4c7911\") " pod="openstack/ceilometer-0" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.992162 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/7213a0a5-92b6-4b30-93e7-21e24d4c7911-run-httpd\") pod \"ceilometer-0\" (UID: \"7213a0a5-92b6-4b30-93e7-21e24d4c7911\") " pod="openstack/ceilometer-0" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.995283 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7213a0a5-92b6-4b30-93e7-21e24d4c7911-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7213a0a5-92b6-4b30-93e7-21e24d4c7911\") " pod="openstack/ceilometer-0" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.995793 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7213a0a5-92b6-4b30-93e7-21e24d4c7911-config-data\") pod \"ceilometer-0\" (UID: \"7213a0a5-92b6-4b30-93e7-21e24d4c7911\") " pod="openstack/ceilometer-0" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.996365 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7213a0a5-92b6-4b30-93e7-21e24d4c7911-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7213a0a5-92b6-4b30-93e7-21e24d4c7911\") " pod="openstack/ceilometer-0" Feb 19 13:06:16 crc kubenswrapper[4833]: I0219 13:06:16.997214 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7213a0a5-92b6-4b30-93e7-21e24d4c7911-scripts\") pod \"ceilometer-0\" (UID: \"7213a0a5-92b6-4b30-93e7-21e24d4c7911\") " pod="openstack/ceilometer-0" Feb 19 13:06:17 crc kubenswrapper[4833]: I0219 13:06:17.009524 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48mkb\" (UniqueName: \"kubernetes.io/projected/7213a0a5-92b6-4b30-93e7-21e24d4c7911-kube-api-access-48mkb\") pod \"ceilometer-0\" (UID: \"7213a0a5-92b6-4b30-93e7-21e24d4c7911\") " pod="openstack/ceilometer-0" Feb 19 13:06:17 crc kubenswrapper[4833]: I0219 13:06:17.149820 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:06:17 crc kubenswrapper[4833]: I0219 13:06:17.599878 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:06:17 crc kubenswrapper[4833]: I0219 13:06:17.711761 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7213a0a5-92b6-4b30-93e7-21e24d4c7911","Type":"ContainerStarted","Data":"ad3a645c2933ef7d1d19b00bfca1e522155f36dc5c9eda916e923b4ab1fc804f"} Feb 19 13:06:18 crc kubenswrapper[4833]: I0219 13:06:18.323633 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8" path="/var/lib/kubelet/pods/c73fa0cf-a9c2-4e45-a1b1-0bebf72377c8/volumes" Feb 19 13:06:18 crc kubenswrapper[4833]: I0219 13:06:18.329593 4833 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5bc667ffbb-qqgnx" podUID="bbfe4179-53a2-4a74-9045-7a498c9aad70" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Feb 19 13:06:18 crc kubenswrapper[4833]: I0219 13:06:18.329714 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5bc667ffbb-qqgnx" Feb 19 13:06:18 crc kubenswrapper[4833]: I0219 13:06:18.444935 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:06:18 crc kubenswrapper[4833]: I0219 13:06:18.741651 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7213a0a5-92b6-4b30-93e7-21e24d4c7911","Type":"ContainerStarted","Data":"d1d59793814c4957cabb63065b053bfe1c4e48fd96e8ce3845dc0a67276f1fa0"} Feb 19 13:06:19 crc kubenswrapper[4833]: I0219 13:06:19.429837 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-756fd4958c-8cv9q" Feb 19 13:06:19 crc kubenswrapper[4833]: I0219 13:06:19.433330 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-756fd4958c-8cv9q" Feb 19 13:06:19 crc kubenswrapper[4833]: I0219 13:06:19.751794 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7213a0a5-92b6-4b30-93e7-21e24d4c7911","Type":"ContainerStarted","Data":"ddf8ec2b3e269b6254e117609e3d98e37da2c647e1cea984d4c10eb34551122f"} Feb 19 13:06:19 crc kubenswrapper[4833]: I0219 13:06:19.752110 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7213a0a5-92b6-4b30-93e7-21e24d4c7911","Type":"ContainerStarted","Data":"4c78bcde467e4bfb672a8b472bd6421a8730ef1c251e943d595fb98ee3098351"} Feb 19 13:06:19 crc kubenswrapper[4833]: I0219 13:06:19.953218 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 13:06:19 crc kubenswrapper[4833]: I0219 13:06:19.953464 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="da1e5208-5817-401e-bfbb-22088b43b335" containerName="glance-log" containerID="cri-o://db109be88fe501c09010be9d5c238b4baa636b5652811d6eaca6c6afe3e03ea5" gracePeriod=30 Feb 19 13:06:19 crc kubenswrapper[4833]: I0219 13:06:19.953586 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="da1e5208-5817-401e-bfbb-22088b43b335" containerName="glance-httpd" 
containerID="cri-o://c03d5be704808c019046749b4026d6bb6247b02ad09a83db981577c992edda1a" gracePeriod=30 Feb 19 13:06:20 crc kubenswrapper[4833]: I0219 13:06:20.762874 4833 generic.go:334] "Generic (PLEG): container finished" podID="da1e5208-5817-401e-bfbb-22088b43b335" containerID="db109be88fe501c09010be9d5c238b4baa636b5652811d6eaca6c6afe3e03ea5" exitCode=143 Feb 19 13:06:20 crc kubenswrapper[4833]: I0219 13:06:20.762964 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"da1e5208-5817-401e-bfbb-22088b43b335","Type":"ContainerDied","Data":"db109be88fe501c09010be9d5c238b4baa636b5652811d6eaca6c6afe3e03ea5"} Feb 19 13:06:20 crc kubenswrapper[4833]: I0219 13:06:20.803826 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-75865f57f7-4q4h9" Feb 19 13:06:20 crc kubenswrapper[4833]: I0219 13:06:20.877824 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5c68546898-xbhvb"] Feb 19 13:06:20 crc kubenswrapper[4833]: I0219 13:06:20.878043 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5c68546898-xbhvb" podUID="8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd" containerName="neutron-api" containerID="cri-o://b3b373fdaba482359d4a41c0b87a9aa0ac363f6d6c8f0b744a1e739cd266cfc2" gracePeriod=30 Feb 19 13:06:20 crc kubenswrapper[4833]: I0219 13:06:20.878435 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5c68546898-xbhvb" podUID="8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd" containerName="neutron-httpd" containerID="cri-o://c557dfc5ea7cd8c7844b22884524ffed34f0427c6b016924b3037f3635629ef6" gracePeriod=30 Feb 19 13:06:21 crc kubenswrapper[4833]: I0219 13:06:21.772486 4833 generic.go:334] "Generic (PLEG): container finished" podID="8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd" containerID="c557dfc5ea7cd8c7844b22884524ffed34f0427c6b016924b3037f3635629ef6" exitCode=0 Feb 19 13:06:21 crc kubenswrapper[4833]: I0219 13:06:21.772546 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c68546898-xbhvb" event={"ID":"8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd","Type":"ContainerDied","Data":"c557dfc5ea7cd8c7844b22884524ffed34f0427c6b016924b3037f3635629ef6"} Feb 19 13:06:22 crc kubenswrapper[4833]: I0219 13:06:22.784024 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7213a0a5-92b6-4b30-93e7-21e24d4c7911","Type":"ContainerStarted","Data":"1c9a620015677dd515b3f75d36c786b4930ccf6cd97d91fb914cf50176a708fc"} Feb 19 13:06:22 crc kubenswrapper[4833]: I0219 13:06:22.784213 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7213a0a5-92b6-4b30-93e7-21e24d4c7911" containerName="ceilometer-central-agent" containerID="cri-o://d1d59793814c4957cabb63065b053bfe1c4e48fd96e8ce3845dc0a67276f1fa0" gracePeriod=30 Feb 19 13:06:22 crc kubenswrapper[4833]: I0219 13:06:22.784278 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7213a0a5-92b6-4b30-93e7-21e24d4c7911" containerName="proxy-httpd" containerID="cri-o://1c9a620015677dd515b3f75d36c786b4930ccf6cd97d91fb914cf50176a708fc" gracePeriod=30 Feb 19 13:06:22 crc kubenswrapper[4833]: I0219 13:06:22.784339 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7213a0a5-92b6-4b30-93e7-21e24d4c7911" containerName="sg-core" 
containerID="cri-o://ddf8ec2b3e269b6254e117609e3d98e37da2c647e1cea984d4c10eb34551122f" gracePeriod=30 Feb 19 13:06:22 crc kubenswrapper[4833]: I0219 13:06:22.784402 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7213a0a5-92b6-4b30-93e7-21e24d4c7911" containerName="ceilometer-notification-agent" containerID="cri-o://4c78bcde467e4bfb672a8b472bd6421a8730ef1c251e943d595fb98ee3098351" gracePeriod=30 Feb 19 13:06:22 crc kubenswrapper[4833]: I0219 13:06:22.784543 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 13:06:22 crc kubenswrapper[4833]: I0219 13:06:22.822713 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.478748124 podStartE2EDuration="6.822688517s" podCreationTimestamp="2026-02-19 13:06:16 +0000 UTC" firstStartedPulling="2026-02-19 13:06:17.604129985 +0000 UTC m=+1187.999648753" lastFinishedPulling="2026-02-19 13:06:21.948070378 +0000 UTC m=+1192.343589146" observedRunningTime="2026-02-19 13:06:22.813473929 +0000 UTC m=+1193.208992697" watchObservedRunningTime="2026-02-19 13:06:22.822688517 +0000 UTC m=+1193.218207295" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.633317 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.705911 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da1e5208-5817-401e-bfbb-22088b43b335-internal-tls-certs\") pod \"da1e5208-5817-401e-bfbb-22088b43b335\" (UID: \"da1e5208-5817-401e-bfbb-22088b43b335\") " Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.706003 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da1e5208-5817-401e-bfbb-22088b43b335-scripts\") pod \"da1e5208-5817-401e-bfbb-22088b43b335\" (UID: \"da1e5208-5817-401e-bfbb-22088b43b335\") " Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.706026 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlt2z\" (UniqueName: \"kubernetes.io/projected/da1e5208-5817-401e-bfbb-22088b43b335-kube-api-access-zlt2z\") pod \"da1e5208-5817-401e-bfbb-22088b43b335\" (UID: \"da1e5208-5817-401e-bfbb-22088b43b335\") " Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.706134 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da1e5208-5817-401e-bfbb-22088b43b335-logs\") pod \"da1e5208-5817-401e-bfbb-22088b43b335\" (UID: \"da1e5208-5817-401e-bfbb-22088b43b335\") " Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.706161 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/da1e5208-5817-401e-bfbb-22088b43b335-httpd-run\") pod \"da1e5208-5817-401e-bfbb-22088b43b335\" (UID: \"da1e5208-5817-401e-bfbb-22088b43b335\") " Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.706202 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"da1e5208-5817-401e-bfbb-22088b43b335\" (UID: \"da1e5208-5817-401e-bfbb-22088b43b335\") " Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.706252 4833 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da1e5208-5817-401e-bfbb-22088b43b335-config-data\") pod \"da1e5208-5817-401e-bfbb-22088b43b335\" (UID: \"da1e5208-5817-401e-bfbb-22088b43b335\") " Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.706268 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da1e5208-5817-401e-bfbb-22088b43b335-combined-ca-bundle\") pod \"da1e5208-5817-401e-bfbb-22088b43b335\" (UID: \"da1e5208-5817-401e-bfbb-22088b43b335\") " Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.708879 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da1e5208-5817-401e-bfbb-22088b43b335-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "da1e5208-5817-401e-bfbb-22088b43b335" (UID: "da1e5208-5817-401e-bfbb-22088b43b335"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.709800 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da1e5208-5817-401e-bfbb-22088b43b335-logs" (OuterVolumeSpecName: "logs") pod "da1e5208-5817-401e-bfbb-22088b43b335" (UID: "da1e5208-5817-401e-bfbb-22088b43b335"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.715340 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "da1e5208-5817-401e-bfbb-22088b43b335" (UID: "da1e5208-5817-401e-bfbb-22088b43b335"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.723828 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da1e5208-5817-401e-bfbb-22088b43b335-kube-api-access-zlt2z" (OuterVolumeSpecName: "kube-api-access-zlt2z") pod "da1e5208-5817-401e-bfbb-22088b43b335" (UID: "da1e5208-5817-401e-bfbb-22088b43b335"). InnerVolumeSpecName "kube-api-access-zlt2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.723985 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da1e5208-5817-401e-bfbb-22088b43b335-scripts" (OuterVolumeSpecName: "scripts") pod "da1e5208-5817-401e-bfbb-22088b43b335" (UID: "da1e5208-5817-401e-bfbb-22088b43b335"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.740273 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da1e5208-5817-401e-bfbb-22088b43b335-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da1e5208-5817-401e-bfbb-22088b43b335" (UID: "da1e5208-5817-401e-bfbb-22088b43b335"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.779783 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5bc667ffbb-qqgnx" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.783600 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da1e5208-5817-401e-bfbb-22088b43b335-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "da1e5208-5817-401e-bfbb-22088b43b335" (UID: "da1e5208-5817-401e-bfbb-22088b43b335"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.793670 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da1e5208-5817-401e-bfbb-22088b43b335-config-data" (OuterVolumeSpecName: "config-data") pod "da1e5208-5817-401e-bfbb-22088b43b335" (UID: "da1e5208-5817-401e-bfbb-22088b43b335"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.811766 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.811801 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da1e5208-5817-401e-bfbb-22088b43b335-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.811813 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da1e5208-5817-401e-bfbb-22088b43b335-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.811823 4833 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da1e5208-5817-401e-bfbb-22088b43b335-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.811831 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/da1e5208-5817-401e-bfbb-22088b43b335-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.811843 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlt2z\" (UniqueName: \"kubernetes.io/projected/da1e5208-5817-401e-bfbb-22088b43b335-kube-api-access-zlt2z\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.811851 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da1e5208-5817-401e-bfbb-22088b43b335-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.811859 4833 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/da1e5208-5817-401e-bfbb-22088b43b335-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.815690 4833 generic.go:334] "Generic (PLEG): container finished" podID="bbfe4179-53a2-4a74-9045-7a498c9aad70" containerID="7c2c0e64de50f0dccba6e9b4398327ba4df4280f15a675770e28fa45c64968c4" exitCode=137 Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.815749 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bc667ffbb-qqgnx" 
event={"ID":"bbfe4179-53a2-4a74-9045-7a498c9aad70","Type":"ContainerDied","Data":"7c2c0e64de50f0dccba6e9b4398327ba4df4280f15a675770e28fa45c64968c4"} Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.815775 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5bc667ffbb-qqgnx" event={"ID":"bbfe4179-53a2-4a74-9045-7a498c9aad70","Type":"ContainerDied","Data":"a80acf5d9b9ba7656c767add6059ad421d16a2f6ceb88f7c50fce0faad400c90"} Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.815791 4833 scope.go:117] "RemoveContainer" containerID="da885bfb26f991009fe0b1720edeffac674904d13ac1face22d0ea347803793c" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.815900 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5bc667ffbb-qqgnx" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.818674 4833 generic.go:334] "Generic (PLEG): container finished" podID="da1e5208-5817-401e-bfbb-22088b43b335" containerID="c03d5be704808c019046749b4026d6bb6247b02ad09a83db981577c992edda1a" exitCode=0 Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.818715 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"da1e5208-5817-401e-bfbb-22088b43b335","Type":"ContainerDied","Data":"c03d5be704808c019046749b4026d6bb6247b02ad09a83db981577c992edda1a"} Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.818731 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"da1e5208-5817-401e-bfbb-22088b43b335","Type":"ContainerDied","Data":"3861702f02b0ffcdfc8bcda55dd24c5496d69f67743b92ab2b2f57746892751d"} Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.818770 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.824588 4833 generic.go:334] "Generic (PLEG): container finished" podID="7213a0a5-92b6-4b30-93e7-21e24d4c7911" containerID="1c9a620015677dd515b3f75d36c786b4930ccf6cd97d91fb914cf50176a708fc" exitCode=0 Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.824638 4833 generic.go:334] "Generic (PLEG): container finished" podID="7213a0a5-92b6-4b30-93e7-21e24d4c7911" containerID="ddf8ec2b3e269b6254e117609e3d98e37da2c647e1cea984d4c10eb34551122f" exitCode=2 Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.824647 4833 generic.go:334] "Generic (PLEG): container finished" podID="7213a0a5-92b6-4b30-93e7-21e24d4c7911" containerID="4c78bcde467e4bfb672a8b472bd6421a8730ef1c251e943d595fb98ee3098351" exitCode=0 Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.824666 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7213a0a5-92b6-4b30-93e7-21e24d4c7911","Type":"ContainerDied","Data":"1c9a620015677dd515b3f75d36c786b4930ccf6cd97d91fb914cf50176a708fc"} Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.824688 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7213a0a5-92b6-4b30-93e7-21e24d4c7911","Type":"ContainerDied","Data":"ddf8ec2b3e269b6254e117609e3d98e37da2c647e1cea984d4c10eb34551122f"} Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.824717 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7213a0a5-92b6-4b30-93e7-21e24d4c7911","Type":"ContainerDied","Data":"4c78bcde467e4bfb672a8b472bd6421a8730ef1c251e943d595fb98ee3098351"} Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.830629 4833 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.889082 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.899695 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.940454 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bbfe4179-53a2-4a74-9045-7a498c9aad70-config-data\") pod \"bbfe4179-53a2-4a74-9045-7a498c9aad70\" (UID: \"bbfe4179-53a2-4a74-9045-7a498c9aad70\") " Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.940510 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcjt9\" (UniqueName: \"kubernetes.io/projected/bbfe4179-53a2-4a74-9045-7a498c9aad70-kube-api-access-fcjt9\") pod \"bbfe4179-53a2-4a74-9045-7a498c9aad70\" (UID: \"bbfe4179-53a2-4a74-9045-7a498c9aad70\") " Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.940690 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bbfe4179-53a2-4a74-9045-7a498c9aad70-scripts\") pod \"bbfe4179-53a2-4a74-9045-7a498c9aad70\" (UID: \"bbfe4179-53a2-4a74-9045-7a498c9aad70\") " Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.940770 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/bbfe4179-53a2-4a74-9045-7a498c9aad70-logs\") pod \"bbfe4179-53a2-4a74-9045-7a498c9aad70\" (UID: \"bbfe4179-53a2-4a74-9045-7a498c9aad70\") " Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.940798 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbfe4179-53a2-4a74-9045-7a498c9aad70-combined-ca-bundle\") pod \"bbfe4179-53a2-4a74-9045-7a498c9aad70\" (UID: \"bbfe4179-53a2-4a74-9045-7a498c9aad70\") " Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.940828 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbfe4179-53a2-4a74-9045-7a498c9aad70-horizon-tls-certs\") pod \"bbfe4179-53a2-4a74-9045-7a498c9aad70\" (UID: \"bbfe4179-53a2-4a74-9045-7a498c9aad70\") " Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.940860 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bbfe4179-53a2-4a74-9045-7a498c9aad70-horizon-secret-key\") pod \"bbfe4179-53a2-4a74-9045-7a498c9aad70\" (UID: \"bbfe4179-53a2-4a74-9045-7a498c9aad70\") " Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.941225 4833 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.945843 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbfe4179-53a2-4a74-9045-7a498c9aad70-logs" (OuterVolumeSpecName: "logs") pod "bbfe4179-53a2-4a74-9045-7a498c9aad70" (UID: "bbfe4179-53a2-4a74-9045-7a498c9aad70"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.953674 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbfe4179-53a2-4a74-9045-7a498c9aad70-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "bbfe4179-53a2-4a74-9045-7a498c9aad70" (UID: "bbfe4179-53a2-4a74-9045-7a498c9aad70"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.954640 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbfe4179-53a2-4a74-9045-7a498c9aad70-kube-api-access-fcjt9" (OuterVolumeSpecName: "kube-api-access-fcjt9") pod "bbfe4179-53a2-4a74-9045-7a498c9aad70" (UID: "bbfe4179-53a2-4a74-9045-7a498c9aad70"). InnerVolumeSpecName "kube-api-access-fcjt9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.965992 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 13:06:23 crc kubenswrapper[4833]: E0219 13:06:23.966376 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbfe4179-53a2-4a74-9045-7a498c9aad70" containerName="horizon-log" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.966388 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbfe4179-53a2-4a74-9045-7a498c9aad70" containerName="horizon-log" Feb 19 13:06:23 crc kubenswrapper[4833]: E0219 13:06:23.966400 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da1e5208-5817-401e-bfbb-22088b43b335" containerName="glance-httpd" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.966406 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="da1e5208-5817-401e-bfbb-22088b43b335" containerName="glance-httpd" Feb 19 13:06:23 crc kubenswrapper[4833]: E0219 13:06:23.966418 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbfe4179-53a2-4a74-9045-7a498c9aad70" containerName="horizon" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.966425 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbfe4179-53a2-4a74-9045-7a498c9aad70" containerName="horizon" Feb 19 13:06:23 crc kubenswrapper[4833]: E0219 13:06:23.966441 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da1e5208-5817-401e-bfbb-22088b43b335" containerName="glance-log" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.966447 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="da1e5208-5817-401e-bfbb-22088b43b335" containerName="glance-log" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.966615 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbfe4179-53a2-4a74-9045-7a498c9aad70" containerName="horizon" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.966628 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbfe4179-53a2-4a74-9045-7a498c9aad70" containerName="horizon-log" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.966636 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="da1e5208-5817-401e-bfbb-22088b43b335" containerName="glance-log" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.966646 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="da1e5208-5817-401e-bfbb-22088b43b335" containerName="glance-httpd" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.978661 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.982795 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.983473 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.988463 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbfe4179-53a2-4a74-9045-7a498c9aad70-config-data" (OuterVolumeSpecName: "config-data") pod "bbfe4179-53a2-4a74-9045-7a498c9aad70" (UID: "bbfe4179-53a2-4a74-9045-7a498c9aad70"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:06:23 crc kubenswrapper[4833]: I0219 13:06:23.999463 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.010774 4833 scope.go:117] "RemoveContainer" containerID="7c2c0e64de50f0dccba6e9b4398327ba4df4280f15a675770e28fa45c64968c4" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.011101 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbfe4179-53a2-4a74-9045-7a498c9aad70-scripts" (OuterVolumeSpecName: "scripts") pod "bbfe4179-53a2-4a74-9045-7a498c9aad70" (UID: "bbfe4179-53a2-4a74-9045-7a498c9aad70"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.015849 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbfe4179-53a2-4a74-9045-7a498c9aad70-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "bbfe4179-53a2-4a74-9045-7a498c9aad70" (UID: "bbfe4179-53a2-4a74-9045-7a498c9aad70"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.021457 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbfe4179-53a2-4a74-9045-7a498c9aad70-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bbfe4179-53a2-4a74-9045-7a498c9aad70" (UID: "bbfe4179-53a2-4a74-9045-7a498c9aad70"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.043739 4833 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/bbfe4179-53a2-4a74-9045-7a498c9aad70-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.043770 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bbfe4179-53a2-4a74-9045-7a498c9aad70-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.043782 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcjt9\" (UniqueName: \"kubernetes.io/projected/bbfe4179-53a2-4a74-9045-7a498c9aad70-kube-api-access-fcjt9\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.043791 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bbfe4179-53a2-4a74-9045-7a498c9aad70-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.043799 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbfe4179-53a2-4a74-9045-7a498c9aad70-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.043807 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbfe4179-53a2-4a74-9045-7a498c9aad70-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.043816 4833 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbfe4179-53a2-4a74-9045-7a498c9aad70-horizon-tls-certs\") on node 
\"crc\" DevicePath \"\"" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.044225 4833 scope.go:117] "RemoveContainer" containerID="da885bfb26f991009fe0b1720edeffac674904d13ac1face22d0ea347803793c" Feb 19 13:06:24 crc kubenswrapper[4833]: E0219 13:06:24.044608 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da885bfb26f991009fe0b1720edeffac674904d13ac1face22d0ea347803793c\": container with ID starting with da885bfb26f991009fe0b1720edeffac674904d13ac1face22d0ea347803793c not found: ID does not exist" containerID="da885bfb26f991009fe0b1720edeffac674904d13ac1face22d0ea347803793c" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.044635 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da885bfb26f991009fe0b1720edeffac674904d13ac1face22d0ea347803793c"} err="failed to get container status \"da885bfb26f991009fe0b1720edeffac674904d13ac1face22d0ea347803793c\": rpc error: code = NotFound desc = could not find container \"da885bfb26f991009fe0b1720edeffac674904d13ac1face22d0ea347803793c\": container with ID starting with da885bfb26f991009fe0b1720edeffac674904d13ac1face22d0ea347803793c not found: ID does not exist" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.044655 4833 scope.go:117] "RemoveContainer" containerID="7c2c0e64de50f0dccba6e9b4398327ba4df4280f15a675770e28fa45c64968c4" Feb 19 13:06:24 crc kubenswrapper[4833]: E0219 13:06:24.048816 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c2c0e64de50f0dccba6e9b4398327ba4df4280f15a675770e28fa45c64968c4\": container with ID starting with 7c2c0e64de50f0dccba6e9b4398327ba4df4280f15a675770e28fa45c64968c4 not found: ID does not exist" containerID="7c2c0e64de50f0dccba6e9b4398327ba4df4280f15a675770e28fa45c64968c4" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.048844 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c2c0e64de50f0dccba6e9b4398327ba4df4280f15a675770e28fa45c64968c4"} err="failed to get container status \"7c2c0e64de50f0dccba6e9b4398327ba4df4280f15a675770e28fa45c64968c4\": rpc error: code = NotFound desc = could not find container \"7c2c0e64de50f0dccba6e9b4398327ba4df4280f15a675770e28fa45c64968c4\": container with ID starting with 7c2c0e64de50f0dccba6e9b4398327ba4df4280f15a675770e28fa45c64968c4 not found: ID does not exist" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.048861 4833 scope.go:117] "RemoveContainer" containerID="c03d5be704808c019046749b4026d6bb6247b02ad09a83db981577c992edda1a" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.070779 4833 scope.go:117] "RemoveContainer" containerID="db109be88fe501c09010be9d5c238b4baa636b5652811d6eaca6c6afe3e03ea5" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.104312 4833 scope.go:117] "RemoveContainer" containerID="c03d5be704808c019046749b4026d6bb6247b02ad09a83db981577c992edda1a" Feb 19 13:06:24 crc kubenswrapper[4833]: E0219 13:06:24.104796 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c03d5be704808c019046749b4026d6bb6247b02ad09a83db981577c992edda1a\": container with ID starting with c03d5be704808c019046749b4026d6bb6247b02ad09a83db981577c992edda1a not found: ID does not exist" containerID="c03d5be704808c019046749b4026d6bb6247b02ad09a83db981577c992edda1a" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 
13:06:24.104857 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c03d5be704808c019046749b4026d6bb6247b02ad09a83db981577c992edda1a"} err="failed to get container status \"c03d5be704808c019046749b4026d6bb6247b02ad09a83db981577c992edda1a\": rpc error: code = NotFound desc = could not find container \"c03d5be704808c019046749b4026d6bb6247b02ad09a83db981577c992edda1a\": container with ID starting with c03d5be704808c019046749b4026d6bb6247b02ad09a83db981577c992edda1a not found: ID does not exist" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.104888 4833 scope.go:117] "RemoveContainer" containerID="db109be88fe501c09010be9d5c238b4baa636b5652811d6eaca6c6afe3e03ea5" Feb 19 13:06:24 crc kubenswrapper[4833]: E0219 13:06:24.108122 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db109be88fe501c09010be9d5c238b4baa636b5652811d6eaca6c6afe3e03ea5\": container with ID starting with db109be88fe501c09010be9d5c238b4baa636b5652811d6eaca6c6afe3e03ea5 not found: ID does not exist" containerID="db109be88fe501c09010be9d5c238b4baa636b5652811d6eaca6c6afe3e03ea5" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.108161 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db109be88fe501c09010be9d5c238b4baa636b5652811d6eaca6c6afe3e03ea5"} err="failed to get container status \"db109be88fe501c09010be9d5c238b4baa636b5652811d6eaca6c6afe3e03ea5\": rpc error: code = NotFound desc = could not find container \"db109be88fe501c09010be9d5c238b4baa636b5652811d6eaca6c6afe3e03ea5\": container with ID starting with db109be88fe501c09010be9d5c238b4baa636b5652811d6eaca6c6afe3e03ea5 not found: ID does not exist" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.145665 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f177d83-63c7-433e-aeb0-e8a91b6216f8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0f177d83-63c7-433e-aeb0-e8a91b6216f8\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.145785 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"0f177d83-63c7-433e-aeb0-e8a91b6216f8\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.145842 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f177d83-63c7-433e-aeb0-e8a91b6216f8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0f177d83-63c7-433e-aeb0-e8a91b6216f8\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.145873 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f177d83-63c7-433e-aeb0-e8a91b6216f8-logs\") pod \"glance-default-internal-api-0\" (UID: \"0f177d83-63c7-433e-aeb0-e8a91b6216f8\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.145915 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/0f177d83-63c7-433e-aeb0-e8a91b6216f8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0f177d83-63c7-433e-aeb0-e8a91b6216f8\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.145936 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqgwr\" (UniqueName: \"kubernetes.io/projected/0f177d83-63c7-433e-aeb0-e8a91b6216f8-kube-api-access-mqgwr\") pod \"glance-default-internal-api-0\" (UID: \"0f177d83-63c7-433e-aeb0-e8a91b6216f8\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.145992 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0f177d83-63c7-433e-aeb0-e8a91b6216f8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0f177d83-63c7-433e-aeb0-e8a91b6216f8\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.146016 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f177d83-63c7-433e-aeb0-e8a91b6216f8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0f177d83-63c7-433e-aeb0-e8a91b6216f8\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.147900 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5bc667ffbb-qqgnx"] Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.155596 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5bc667ffbb-qqgnx"] Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.247179 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0f177d83-63c7-433e-aeb0-e8a91b6216f8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0f177d83-63c7-433e-aeb0-e8a91b6216f8\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.247230 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f177d83-63c7-433e-aeb0-e8a91b6216f8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0f177d83-63c7-433e-aeb0-e8a91b6216f8\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.247278 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f177d83-63c7-433e-aeb0-e8a91b6216f8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0f177d83-63c7-433e-aeb0-e8a91b6216f8\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.247330 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"0f177d83-63c7-433e-aeb0-e8a91b6216f8\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.247368 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0f177d83-63c7-433e-aeb0-e8a91b6216f8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0f177d83-63c7-433e-aeb0-e8a91b6216f8\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.247390 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f177d83-63c7-433e-aeb0-e8a91b6216f8-logs\") pod \"glance-default-internal-api-0\" (UID: \"0f177d83-63c7-433e-aeb0-e8a91b6216f8\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.247423 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f177d83-63c7-433e-aeb0-e8a91b6216f8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0f177d83-63c7-433e-aeb0-e8a91b6216f8\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.247447 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqgwr\" (UniqueName: \"kubernetes.io/projected/0f177d83-63c7-433e-aeb0-e8a91b6216f8-kube-api-access-mqgwr\") pod \"glance-default-internal-api-0\" (UID: \"0f177d83-63c7-433e-aeb0-e8a91b6216f8\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.247698 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0f177d83-63c7-433e-aeb0-e8a91b6216f8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0f177d83-63c7-433e-aeb0-e8a91b6216f8\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.247835 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"0f177d83-63c7-433e-aeb0-e8a91b6216f8\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.247963 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0f177d83-63c7-433e-aeb0-e8a91b6216f8-logs\") pod \"glance-default-internal-api-0\" (UID: \"0f177d83-63c7-433e-aeb0-e8a91b6216f8\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.253157 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f177d83-63c7-433e-aeb0-e8a91b6216f8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0f177d83-63c7-433e-aeb0-e8a91b6216f8\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.253376 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f177d83-63c7-433e-aeb0-e8a91b6216f8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0f177d83-63c7-433e-aeb0-e8a91b6216f8\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.255020 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f177d83-63c7-433e-aeb0-e8a91b6216f8-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"0f177d83-63c7-433e-aeb0-e8a91b6216f8\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.278567 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqgwr\" (UniqueName: \"kubernetes.io/projected/0f177d83-63c7-433e-aeb0-e8a91b6216f8-kube-api-access-mqgwr\") pod \"glance-default-internal-api-0\" (UID: \"0f177d83-63c7-433e-aeb0-e8a91b6216f8\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.279132 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f177d83-63c7-433e-aeb0-e8a91b6216f8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0f177d83-63c7-433e-aeb0-e8a91b6216f8\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.297247 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"0f177d83-63c7-433e-aeb0-e8a91b6216f8\") " pod="openstack/glance-default-internal-api-0" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.324676 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbfe4179-53a2-4a74-9045-7a498c9aad70" path="/var/lib/kubelet/pods/bbfe4179-53a2-4a74-9045-7a498c9aad70/volumes" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.325343 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da1e5208-5817-401e-bfbb-22088b43b335" path="/var/lib/kubelet/pods/da1e5208-5817-401e-bfbb-22088b43b335/volumes" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.329607 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.789931 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.790184 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="dc02fc0d-426e-41b7-a8c4-8f5aed508b6b" containerName="glance-log" containerID="cri-o://c3c5299ffd2a0d9db03fe85db617f6c5a38862390df2c2ad828aa2da22d9db91" gracePeriod=30 Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.790240 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="dc02fc0d-426e-41b7-a8c4-8f5aed508b6b" containerName="glance-httpd" containerID="cri-o://a38c6d10b1345839fc705ee714bda676f48e32605d9b2a1f828098a8640318ca" gracePeriod=30 Feb 19 13:06:24 crc kubenswrapper[4833]: I0219 13:06:24.956156 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 19 13:06:25 crc kubenswrapper[4833]: I0219 13:06:25.841665 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0f177d83-63c7-433e-aeb0-e8a91b6216f8","Type":"ContainerStarted","Data":"8d7aa531ff35249c6d753c9f9939637ff6ab9366d29bf8e531ef85c9122f86ce"} Feb 19 13:06:25 crc kubenswrapper[4833]: I0219 13:06:25.844347 4833 generic.go:334] "Generic (PLEG): container finished" podID="dc02fc0d-426e-41b7-a8c4-8f5aed508b6b" containerID="c3c5299ffd2a0d9db03fe85db617f6c5a38862390df2c2ad828aa2da22d9db91" exitCode=143 Feb 19 13:06:25 crc kubenswrapper[4833]: I0219 13:06:25.844432 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b","Type":"ContainerDied","Data":"c3c5299ffd2a0d9db03fe85db617f6c5a38862390df2c2ad828aa2da22d9db91"} Feb 19 13:06:26 crc kubenswrapper[4833]: I0219 13:06:26.855756 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0f177d83-63c7-433e-aeb0-e8a91b6216f8","Type":"ContainerStarted","Data":"4e1a7ad610d5a2ea8512d2ee99a006d43260cd5b45f25d7a6c2b2fcf109ec64d"} Feb 19 13:06:27 crc kubenswrapper[4833]: I0219 13:06:27.865664 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0f177d83-63c7-433e-aeb0-e8a91b6216f8","Type":"ContainerStarted","Data":"1b14cc030059cae7efff125a47e97b7d9f0d250c695ea58e221e673a5b53991c"} Feb 19 13:06:27 crc kubenswrapper[4833]: I0219 13:06:27.883704 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.883683474 podStartE2EDuration="4.883683474s" podCreationTimestamp="2026-02-19 13:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:06:27.881874166 +0000 UTC m=+1198.277392934" watchObservedRunningTime="2026-02-19 13:06:27.883683474 +0000 UTC m=+1198.279202242" Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.795658 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.829428 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-httpd-run\") pod \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\" (UID: \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\") " Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.829987 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "dc02fc0d-426e-41b7-a8c4-8f5aed508b6b" (UID: "dc02fc0d-426e-41b7-a8c4-8f5aed508b6b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.830836 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\" (UID: \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\") " Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.830888 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-scripts\") pod \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\" (UID: \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\") " Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.830914 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-public-tls-certs\") pod \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\" (UID: \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\") " Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.830943 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-config-data\") pod \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\" (UID: \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\") " Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.830994 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rt665\" (UniqueName: \"kubernetes.io/projected/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-kube-api-access-rt665\") pod \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\" (UID: \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\") " Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.831032 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-logs\") pod \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\" (UID: \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\") " Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.831098 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-combined-ca-bundle\") pod \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\" (UID: \"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b\") " Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.831812 4833 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:28 crc 
kubenswrapper[4833]: I0219 13:06:28.832602 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-logs" (OuterVolumeSpecName: "logs") pod "dc02fc0d-426e-41b7-a8c4-8f5aed508b6b" (UID: "dc02fc0d-426e-41b7-a8c4-8f5aed508b6b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.840052 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-kube-api-access-rt665" (OuterVolumeSpecName: "kube-api-access-rt665") pod "dc02fc0d-426e-41b7-a8c4-8f5aed508b6b" (UID: "dc02fc0d-426e-41b7-a8c4-8f5aed508b6b"). InnerVolumeSpecName "kube-api-access-rt665". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.842931 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-scripts" (OuterVolumeSpecName: "scripts") pod "dc02fc0d-426e-41b7-a8c4-8f5aed508b6b" (UID: "dc02fc0d-426e-41b7-a8c4-8f5aed508b6b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.853726 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "dc02fc0d-426e-41b7-a8c4-8f5aed508b6b" (UID: "dc02fc0d-426e-41b7-a8c4-8f5aed508b6b"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.875790 4833 generic.go:334] "Generic (PLEG): container finished" podID="dc02fc0d-426e-41b7-a8c4-8f5aed508b6b" containerID="a38c6d10b1345839fc705ee714bda676f48e32605d9b2a1f828098a8640318ca" exitCode=0 Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.875945 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b","Type":"ContainerDied","Data":"a38c6d10b1345839fc705ee714bda676f48e32605d9b2a1f828098a8640318ca"} Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.875996 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"dc02fc0d-426e-41b7-a8c4-8f5aed508b6b","Type":"ContainerDied","Data":"f5a6e1e0cf6ac6bd95d7e217f2ef7b948c18699b6154ef7178717fe9821a6597"} Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.876013 4833 scope.go:117] "RemoveContainer" containerID="a38c6d10b1345839fc705ee714bda676f48e32605d9b2a1f828098a8640318ca" Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.876165 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.882606 4833 generic.go:334] "Generic (PLEG): container finished" podID="8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd" containerID="b3b373fdaba482359d4a41c0b87a9aa0ac363f6d6c8f0b744a1e739cd266cfc2" exitCode=0 Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.883568 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c68546898-xbhvb" event={"ID":"8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd","Type":"ContainerDied","Data":"b3b373fdaba482359d4a41c0b87a9aa0ac363f6d6c8f0b744a1e739cd266cfc2"} Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.888904 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc02fc0d-426e-41b7-a8c4-8f5aed508b6b" (UID: "dc02fc0d-426e-41b7-a8c4-8f5aed508b6b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.916247 4833 scope.go:117] "RemoveContainer" containerID="c3c5299ffd2a0d9db03fe85db617f6c5a38862390df2c2ad828aa2da22d9db91" Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.922092 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "dc02fc0d-426e-41b7-a8c4-8f5aed508b6b" (UID: "dc02fc0d-426e-41b7-a8c4-8f5aed508b6b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.933703 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-config-data" (OuterVolumeSpecName: "config-data") pod "dc02fc0d-426e-41b7-a8c4-8f5aed508b6b" (UID: "dc02fc0d-426e-41b7-a8c4-8f5aed508b6b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.937046 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.937080 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.937110 4833 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.937121 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.937130 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rt665\" (UniqueName: \"kubernetes.io/projected/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-kube-api-access-rt665\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.937138 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.937146 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.955711 4833 scope.go:117] "RemoveContainer" containerID="a38c6d10b1345839fc705ee714bda676f48e32605d9b2a1f828098a8640318ca" Feb 19 13:06:28 crc kubenswrapper[4833]: E0219 13:06:28.956232 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a38c6d10b1345839fc705ee714bda676f48e32605d9b2a1f828098a8640318ca\": container with ID starting with a38c6d10b1345839fc705ee714bda676f48e32605d9b2a1f828098a8640318ca not found: ID does not exist" containerID="a38c6d10b1345839fc705ee714bda676f48e32605d9b2a1f828098a8640318ca" Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.956277 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a38c6d10b1345839fc705ee714bda676f48e32605d9b2a1f828098a8640318ca"} err="failed to get container status \"a38c6d10b1345839fc705ee714bda676f48e32605d9b2a1f828098a8640318ca\": rpc error: code = NotFound desc = could not find container \"a38c6d10b1345839fc705ee714bda676f48e32605d9b2a1f828098a8640318ca\": container with ID starting with a38c6d10b1345839fc705ee714bda676f48e32605d9b2a1f828098a8640318ca not found: ID does not exist" Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.956302 4833 scope.go:117] "RemoveContainer" containerID="c3c5299ffd2a0d9db03fe85db617f6c5a38862390df2c2ad828aa2da22d9db91" Feb 19 13:06:28 crc kubenswrapper[4833]: E0219 13:06:28.957479 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c3c5299ffd2a0d9db03fe85db617f6c5a38862390df2c2ad828aa2da22d9db91\": container with ID starting with c3c5299ffd2a0d9db03fe85db617f6c5a38862390df2c2ad828aa2da22d9db91 not found: ID does not exist" containerID="c3c5299ffd2a0d9db03fe85db617f6c5a38862390df2c2ad828aa2da22d9db91" Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.957538 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3c5299ffd2a0d9db03fe85db617f6c5a38862390df2c2ad828aa2da22d9db91"} err="failed to get container status \"c3c5299ffd2a0d9db03fe85db617f6c5a38862390df2c2ad828aa2da22d9db91\": rpc error: code = NotFound desc = could not find container \"c3c5299ffd2a0d9db03fe85db617f6c5a38862390df2c2ad828aa2da22d9db91\": container with ID starting with c3c5299ffd2a0d9db03fe85db617f6c5a38862390df2c2ad828aa2da22d9db91 not found: ID does not exist" Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.961348 4833 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 19 13:06:28 crc kubenswrapper[4833]: I0219 13:06:28.970424 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c68546898-xbhvb" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.037861 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd-config\") pod \"8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd\" (UID: \"8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd\") " Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.037974 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkxgm\" (UniqueName: \"kubernetes.io/projected/8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd-kube-api-access-tkxgm\") pod \"8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd\" (UID: \"8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd\") " Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.038111 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd-ovndb-tls-certs\") pod \"8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd\" (UID: \"8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd\") " Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.038152 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd-httpd-config\") pod \"8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd\" (UID: \"8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd\") " Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.038233 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd-combined-ca-bundle\") pod \"8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd\" (UID: \"8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd\") " Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.038621 4833 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.041264 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd-kube-api-access-tkxgm" (OuterVolumeSpecName: 
"kube-api-access-tkxgm") pod "8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd" (UID: "8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd"). InnerVolumeSpecName "kube-api-access-tkxgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.041697 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd" (UID: "8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.080557 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd-config" (OuterVolumeSpecName: "config") pod "8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd" (UID: "8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.087101 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd" (UID: "8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.105946 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd" (UID: "8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.140593 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.140638 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.140653 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkxgm\" (UniqueName: \"kubernetes.io/projected/8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd-kube-api-access-tkxgm\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.140666 4833 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.140677 4833 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.213685 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.220433 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.243588 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 13:06:29 crc kubenswrapper[4833]: E0219 13:06:29.245645 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd" containerName="neutron-httpd" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.245664 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd" containerName="neutron-httpd" Feb 19 13:06:29 crc kubenswrapper[4833]: E0219 13:06:29.245706 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc02fc0d-426e-41b7-a8c4-8f5aed508b6b" containerName="glance-httpd" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.245715 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc02fc0d-426e-41b7-a8c4-8f5aed508b6b" containerName="glance-httpd" Feb 19 13:06:29 crc kubenswrapper[4833]: E0219 13:06:29.245733 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc02fc0d-426e-41b7-a8c4-8f5aed508b6b" containerName="glance-log" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.245740 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc02fc0d-426e-41b7-a8c4-8f5aed508b6b" containerName="glance-log" Feb 19 13:06:29 crc kubenswrapper[4833]: E0219 13:06:29.245759 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd" containerName="neutron-api" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.245765 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd" containerName="neutron-api" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.247522 4833 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd" containerName="neutron-api" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.247543 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc02fc0d-426e-41b7-a8c4-8f5aed508b6b" containerName="glance-httpd" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.247562 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd" containerName="neutron-httpd" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.247595 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc02fc0d-426e-41b7-a8c4-8f5aed508b6b" containerName="glance-log" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.267144 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.270372 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.288164 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.288255 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.346666 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42dcfe39-3d5b-4e0a-8b07-658ec7f665ba-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"42dcfe39-3d5b-4e0a-8b07-658ec7f665ba\") " pod="openstack/glance-default-external-api-0" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.346745 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/42dcfe39-3d5b-4e0a-8b07-658ec7f665ba-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"42dcfe39-3d5b-4e0a-8b07-658ec7f665ba\") " pod="openstack/glance-default-external-api-0" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.346769 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42dcfe39-3d5b-4e0a-8b07-658ec7f665ba-config-data\") pod \"glance-default-external-api-0\" (UID: \"42dcfe39-3d5b-4e0a-8b07-658ec7f665ba\") " pod="openstack/glance-default-external-api-0" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.346828 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42dcfe39-3d5b-4e0a-8b07-658ec7f665ba-logs\") pod \"glance-default-external-api-0\" (UID: \"42dcfe39-3d5b-4e0a-8b07-658ec7f665ba\") " pod="openstack/glance-default-external-api-0" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.346957 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42dcfe39-3d5b-4e0a-8b07-658ec7f665ba-scripts\") pod \"glance-default-external-api-0\" (UID: \"42dcfe39-3d5b-4e0a-8b07-658ec7f665ba\") " pod="openstack/glance-default-external-api-0" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.347015 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-c442m\" (UniqueName: \"kubernetes.io/projected/42dcfe39-3d5b-4e0a-8b07-658ec7f665ba-kube-api-access-c442m\") pod \"glance-default-external-api-0\" (UID: \"42dcfe39-3d5b-4e0a-8b07-658ec7f665ba\") " pod="openstack/glance-default-external-api-0" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.347144 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"42dcfe39-3d5b-4e0a-8b07-658ec7f665ba\") " pod="openstack/glance-default-external-api-0" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.347360 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42dcfe39-3d5b-4e0a-8b07-658ec7f665ba-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"42dcfe39-3d5b-4e0a-8b07-658ec7f665ba\") " pod="openstack/glance-default-external-api-0" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.448618 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42dcfe39-3d5b-4e0a-8b07-658ec7f665ba-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"42dcfe39-3d5b-4e0a-8b07-658ec7f665ba\") " pod="openstack/glance-default-external-api-0" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.448660 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/42dcfe39-3d5b-4e0a-8b07-658ec7f665ba-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"42dcfe39-3d5b-4e0a-8b07-658ec7f665ba\") " pod="openstack/glance-default-external-api-0" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.448680 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42dcfe39-3d5b-4e0a-8b07-658ec7f665ba-config-data\") pod \"glance-default-external-api-0\" (UID: \"42dcfe39-3d5b-4e0a-8b07-658ec7f665ba\") " pod="openstack/glance-default-external-api-0" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.448706 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42dcfe39-3d5b-4e0a-8b07-658ec7f665ba-logs\") pod \"glance-default-external-api-0\" (UID: \"42dcfe39-3d5b-4e0a-8b07-658ec7f665ba\") " pod="openstack/glance-default-external-api-0" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.448733 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42dcfe39-3d5b-4e0a-8b07-658ec7f665ba-scripts\") pod \"glance-default-external-api-0\" (UID: \"42dcfe39-3d5b-4e0a-8b07-658ec7f665ba\") " pod="openstack/glance-default-external-api-0" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.448934 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c442m\" (UniqueName: \"kubernetes.io/projected/42dcfe39-3d5b-4e0a-8b07-658ec7f665ba-kube-api-access-c442m\") pod \"glance-default-external-api-0\" (UID: \"42dcfe39-3d5b-4e0a-8b07-658ec7f665ba\") " pod="openstack/glance-default-external-api-0" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.449029 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"42dcfe39-3d5b-4e0a-8b07-658ec7f665ba\") " pod="openstack/glance-default-external-api-0" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.449387 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42dcfe39-3d5b-4e0a-8b07-658ec7f665ba-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"42dcfe39-3d5b-4e0a-8b07-658ec7f665ba\") " pod="openstack/glance-default-external-api-0" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.449525 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"42dcfe39-3d5b-4e0a-8b07-658ec7f665ba\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.449863 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42dcfe39-3d5b-4e0a-8b07-658ec7f665ba-logs\") pod \"glance-default-external-api-0\" (UID: \"42dcfe39-3d5b-4e0a-8b07-658ec7f665ba\") " pod="openstack/glance-default-external-api-0" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.450472 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/42dcfe39-3d5b-4e0a-8b07-658ec7f665ba-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"42dcfe39-3d5b-4e0a-8b07-658ec7f665ba\") " pod="openstack/glance-default-external-api-0" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.455666 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42dcfe39-3d5b-4e0a-8b07-658ec7f665ba-scripts\") pod \"glance-default-external-api-0\" (UID: \"42dcfe39-3d5b-4e0a-8b07-658ec7f665ba\") " pod="openstack/glance-default-external-api-0" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.456425 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42dcfe39-3d5b-4e0a-8b07-658ec7f665ba-config-data\") pod \"glance-default-external-api-0\" (UID: \"42dcfe39-3d5b-4e0a-8b07-658ec7f665ba\") " pod="openstack/glance-default-external-api-0" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.456795 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/42dcfe39-3d5b-4e0a-8b07-658ec7f665ba-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"42dcfe39-3d5b-4e0a-8b07-658ec7f665ba\") " pod="openstack/glance-default-external-api-0" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.457118 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42dcfe39-3d5b-4e0a-8b07-658ec7f665ba-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"42dcfe39-3d5b-4e0a-8b07-658ec7f665ba\") " pod="openstack/glance-default-external-api-0" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.477715 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c442m\" (UniqueName: \"kubernetes.io/projected/42dcfe39-3d5b-4e0a-8b07-658ec7f665ba-kube-api-access-c442m\") pod \"glance-default-external-api-0\" (UID: 
\"42dcfe39-3d5b-4e0a-8b07-658ec7f665ba\") " pod="openstack/glance-default-external-api-0" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.478771 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"42dcfe39-3d5b-4e0a-8b07-658ec7f665ba\") " pod="openstack/glance-default-external-api-0" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.611477 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.898431 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c68546898-xbhvb" event={"ID":"8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd","Type":"ContainerDied","Data":"501d5127b11a1c453c17c1d20db4a6ee38463124af7d915516dfb70ed3e52856"} Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.898550 4833 scope.go:117] "RemoveContainer" containerID="c557dfc5ea7cd8c7844b22884524ffed34f0427c6b016924b3037f3635629ef6" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.898845 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c68546898-xbhvb" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.920525 4833 scope.go:117] "RemoveContainer" containerID="b3b373fdaba482359d4a41c0b87a9aa0ac363f6d6c8f0b744a1e739cd266cfc2" Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.960450 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5c68546898-xbhvb"] Feb 19 13:06:29 crc kubenswrapper[4833]: I0219 13:06:29.966885 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5c68546898-xbhvb"] Feb 19 13:06:30 crc kubenswrapper[4833]: I0219 13:06:30.151854 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 19 13:06:30 crc kubenswrapper[4833]: W0219 13:06:30.152298 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42dcfe39_3d5b_4e0a_8b07_658ec7f665ba.slice/crio-756ce9dd4c337d8a514c17087ee12098ce61cdde415f5d7d26fc41bb54ece23f WatchSource:0}: Error finding container 756ce9dd4c337d8a514c17087ee12098ce61cdde415f5d7d26fc41bb54ece23f: Status 404 returned error can't find the container with id 756ce9dd4c337d8a514c17087ee12098ce61cdde415f5d7d26fc41bb54ece23f Feb 19 13:06:30 crc kubenswrapper[4833]: I0219 13:06:30.329019 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd" path="/var/lib/kubelet/pods/8e7a995c-7ce5-4685-a4d3-ec7c4cb807dd/volumes" Feb 19 13:06:30 crc kubenswrapper[4833]: I0219 13:06:30.329931 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc02fc0d-426e-41b7-a8c4-8f5aed508b6b" path="/var/lib/kubelet/pods/dc02fc0d-426e-41b7-a8c4-8f5aed508b6b/volumes" Feb 19 13:06:30 crc kubenswrapper[4833]: I0219 13:06:30.678544 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:06:30 crc kubenswrapper[4833]: I0219 13:06:30.771901 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7213a0a5-92b6-4b30-93e7-21e24d4c7911-sg-core-conf-yaml\") pod \"7213a0a5-92b6-4b30-93e7-21e24d4c7911\" (UID: \"7213a0a5-92b6-4b30-93e7-21e24d4c7911\") " Feb 19 13:06:30 crc kubenswrapper[4833]: I0219 13:06:30.772231 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7213a0a5-92b6-4b30-93e7-21e24d4c7911-scripts\") pod \"7213a0a5-92b6-4b30-93e7-21e24d4c7911\" (UID: \"7213a0a5-92b6-4b30-93e7-21e24d4c7911\") " Feb 19 13:06:30 crc kubenswrapper[4833]: I0219 13:06:30.772394 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7213a0a5-92b6-4b30-93e7-21e24d4c7911-log-httpd\") pod \"7213a0a5-92b6-4b30-93e7-21e24d4c7911\" (UID: \"7213a0a5-92b6-4b30-93e7-21e24d4c7911\") " Feb 19 13:06:30 crc kubenswrapper[4833]: I0219 13:06:30.772511 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7213a0a5-92b6-4b30-93e7-21e24d4c7911-config-data\") pod \"7213a0a5-92b6-4b30-93e7-21e24d4c7911\" (UID: \"7213a0a5-92b6-4b30-93e7-21e24d4c7911\") " Feb 19 13:06:30 crc kubenswrapper[4833]: I0219 13:06:30.772583 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48mkb\" (UniqueName: \"kubernetes.io/projected/7213a0a5-92b6-4b30-93e7-21e24d4c7911-kube-api-access-48mkb\") pod \"7213a0a5-92b6-4b30-93e7-21e24d4c7911\" (UID: \"7213a0a5-92b6-4b30-93e7-21e24d4c7911\") " Feb 19 13:06:30 crc kubenswrapper[4833]: I0219 13:06:30.772654 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7213a0a5-92b6-4b30-93e7-21e24d4c7911-run-httpd\") pod \"7213a0a5-92b6-4b30-93e7-21e24d4c7911\" (UID: \"7213a0a5-92b6-4b30-93e7-21e24d4c7911\") " Feb 19 13:06:30 crc kubenswrapper[4833]: I0219 13:06:30.772729 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7213a0a5-92b6-4b30-93e7-21e24d4c7911-combined-ca-bundle\") pod \"7213a0a5-92b6-4b30-93e7-21e24d4c7911\" (UID: \"7213a0a5-92b6-4b30-93e7-21e24d4c7911\") " Feb 19 13:06:30 crc kubenswrapper[4833]: I0219 13:06:30.777319 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7213a0a5-92b6-4b30-93e7-21e24d4c7911-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7213a0a5-92b6-4b30-93e7-21e24d4c7911" (UID: "7213a0a5-92b6-4b30-93e7-21e24d4c7911"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:06:30 crc kubenswrapper[4833]: I0219 13:06:30.785490 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7213a0a5-92b6-4b30-93e7-21e24d4c7911-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7213a0a5-92b6-4b30-93e7-21e24d4c7911" (UID: "7213a0a5-92b6-4b30-93e7-21e24d4c7911"). InnerVolumeSpecName "run-httpd". 
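
The teardown sequence above follows the volume reconciler's three-step pattern: "operationExecutor.UnmountVolume started" when a mounted volume no longer appears in the desired state, "UnmountVolume.TearDown succeeded" when the plugin finishes, and "Volume detached ... DevicePath \"\"" once the entry is dropped from the actual state. A compact sketch of that desired-versus-actual loop, under the assumption that it can be reduced to a set difference (types and function names here are hypothetical, not the kubelet's own):

    package main

    import "fmt"

    // volKey identifies a mount the way these log lines do:
    // pod UID plus volume name.
    type volKey struct{ podUID, volume string }

    // reconcile unmounts every volume still present in actual state but
    // absent from desired state, mirroring the started/succeeded/detached
    // progression visible in the entries above.
    func reconcile(desired, actual map[volKey]bool, unmount func(volKey) error) {
        for k := range actual {
            if desired[k] {
                continue // still wanted by some pod; leave it mounted
            }
            fmt.Printf("UnmountVolume started for volume %q pod %q\n", k.volume, k.podUID)
            if err := unmount(k); err != nil {
                fmt.Printf("UnmountVolume failed: %v\n", err)
                continue // stays in actual state; retried on the next pass
            }
            delete(actual, k)
            fmt.Printf("Volume detached for volume %q on node \"crc\"\n", k.volume)
        }
    }

    func main() {
        // Shortened UIDs for illustration.
        actual := map[volKey]bool{
            {"7213a0a5", "run-httpd"}: true,
            {"7213a0a5", "log-httpd"}: true,
        }
        reconcile(map[volKey]bool{}, actual, func(volKey) error { return nil })
    }
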
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:06:30 crc kubenswrapper[4833]: I0219 13:06:30.823919 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7213a0a5-92b6-4b30-93e7-21e24d4c7911-kube-api-access-48mkb" (OuterVolumeSpecName: "kube-api-access-48mkb") pod "7213a0a5-92b6-4b30-93e7-21e24d4c7911" (UID: "7213a0a5-92b6-4b30-93e7-21e24d4c7911"). InnerVolumeSpecName "kube-api-access-48mkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:06:30 crc kubenswrapper[4833]: I0219 13:06:30.824008 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7213a0a5-92b6-4b30-93e7-21e24d4c7911-scripts" (OuterVolumeSpecName: "scripts") pod "7213a0a5-92b6-4b30-93e7-21e24d4c7911" (UID: "7213a0a5-92b6-4b30-93e7-21e24d4c7911"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:30 crc kubenswrapper[4833]: I0219 13:06:30.830971 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7213a0a5-92b6-4b30-93e7-21e24d4c7911-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7213a0a5-92b6-4b30-93e7-21e24d4c7911" (UID: "7213a0a5-92b6-4b30-93e7-21e24d4c7911"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:30 crc kubenswrapper[4833]: I0219 13:06:30.909551 4833 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7213a0a5-92b6-4b30-93e7-21e24d4c7911-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:30 crc kubenswrapper[4833]: I0219 13:06:30.909585 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48mkb\" (UniqueName: \"kubernetes.io/projected/7213a0a5-92b6-4b30-93e7-21e24d4c7911-kube-api-access-48mkb\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:30 crc kubenswrapper[4833]: I0219 13:06:30.911377 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"42dcfe39-3d5b-4e0a-8b07-658ec7f665ba","Type":"ContainerStarted","Data":"892ede5f2839e654616e38a1d7cc106b6ea41b9ab813c314ef78e824a05882fa"} Feb 19 13:06:30 crc kubenswrapper[4833]: I0219 13:06:30.911415 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"42dcfe39-3d5b-4e0a-8b07-658ec7f665ba","Type":"ContainerStarted","Data":"756ce9dd4c337d8a514c17087ee12098ce61cdde415f5d7d26fc41bb54ece23f"} Feb 19 13:06:30 crc kubenswrapper[4833]: I0219 13:06:30.911614 4833 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7213a0a5-92b6-4b30-93e7-21e24d4c7911-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:30 crc kubenswrapper[4833]: I0219 13:06:30.924888 4833 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7213a0a5-92b6-4b30-93e7-21e24d4c7911-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:30 crc kubenswrapper[4833]: I0219 13:06:30.924934 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7213a0a5-92b6-4b30-93e7-21e24d4c7911-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:30 crc kubenswrapper[4833]: I0219 13:06:30.931859 4833 generic.go:334] "Generic (PLEG): container finished" podID="7213a0a5-92b6-4b30-93e7-21e24d4c7911" 
containerID="d1d59793814c4957cabb63065b053bfe1c4e48fd96e8ce3845dc0a67276f1fa0" exitCode=0 Feb 19 13:06:30 crc kubenswrapper[4833]: I0219 13:06:30.932092 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:06:30 crc kubenswrapper[4833]: I0219 13:06:30.936071 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7213a0a5-92b6-4b30-93e7-21e24d4c7911","Type":"ContainerDied","Data":"d1d59793814c4957cabb63065b053bfe1c4e48fd96e8ce3845dc0a67276f1fa0"} Feb 19 13:06:30 crc kubenswrapper[4833]: I0219 13:06:30.936130 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7213a0a5-92b6-4b30-93e7-21e24d4c7911","Type":"ContainerDied","Data":"ad3a645c2933ef7d1d19b00bfca1e522155f36dc5c9eda916e923b4ab1fc804f"} Feb 19 13:06:30 crc kubenswrapper[4833]: I0219 13:06:30.936151 4833 scope.go:117] "RemoveContainer" containerID="1c9a620015677dd515b3f75d36c786b4930ccf6cd97d91fb914cf50176a708fc" Feb 19 13:06:30 crc kubenswrapper[4833]: I0219 13:06:30.982645 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7213a0a5-92b6-4b30-93e7-21e24d4c7911-config-data" (OuterVolumeSpecName: "config-data") pod "7213a0a5-92b6-4b30-93e7-21e24d4c7911" (UID: "7213a0a5-92b6-4b30-93e7-21e24d4c7911"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:30 crc kubenswrapper[4833]: I0219 13:06:30.988876 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7213a0a5-92b6-4b30-93e7-21e24d4c7911-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7213a0a5-92b6-4b30-93e7-21e24d4c7911" (UID: "7213a0a5-92b6-4b30-93e7-21e24d4c7911"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:30 crc kubenswrapper[4833]: I0219 13:06:30.991903 4833 scope.go:117] "RemoveContainer" containerID="ddf8ec2b3e269b6254e117609e3d98e37da2c647e1cea984d4c10eb34551122f" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.017450 4833 scope.go:117] "RemoveContainer" containerID="4c78bcde467e4bfb672a8b472bd6421a8730ef1c251e943d595fb98ee3098351" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.026327 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7213a0a5-92b6-4b30-93e7-21e24d4c7911-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.026351 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7213a0a5-92b6-4b30-93e7-21e24d4c7911-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.040879 4833 scope.go:117] "RemoveContainer" containerID="d1d59793814c4957cabb63065b053bfe1c4e48fd96e8ce3845dc0a67276f1fa0" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.063996 4833 scope.go:117] "RemoveContainer" containerID="1c9a620015677dd515b3f75d36c786b4930ccf6cd97d91fb914cf50176a708fc" Feb 19 13:06:31 crc kubenswrapper[4833]: E0219 13:06:31.065785 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c9a620015677dd515b3f75d36c786b4930ccf6cd97d91fb914cf50176a708fc\": container with ID starting with 1c9a620015677dd515b3f75d36c786b4930ccf6cd97d91fb914cf50176a708fc not found: ID does not exist" containerID="1c9a620015677dd515b3f75d36c786b4930ccf6cd97d91fb914cf50176a708fc" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.065845 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c9a620015677dd515b3f75d36c786b4930ccf6cd97d91fb914cf50176a708fc"} err="failed to get container status \"1c9a620015677dd515b3f75d36c786b4930ccf6cd97d91fb914cf50176a708fc\": rpc error: code = NotFound desc = could not find container \"1c9a620015677dd515b3f75d36c786b4930ccf6cd97d91fb914cf50176a708fc\": container with ID starting with 1c9a620015677dd515b3f75d36c786b4930ccf6cd97d91fb914cf50176a708fc not found: ID does not exist" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.065944 4833 scope.go:117] "RemoveContainer" containerID="ddf8ec2b3e269b6254e117609e3d98e37da2c647e1cea984d4c10eb34551122f" Feb 19 13:06:31 crc kubenswrapper[4833]: E0219 13:06:31.066230 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddf8ec2b3e269b6254e117609e3d98e37da2c647e1cea984d4c10eb34551122f\": container with ID starting with ddf8ec2b3e269b6254e117609e3d98e37da2c647e1cea984d4c10eb34551122f not found: ID does not exist" containerID="ddf8ec2b3e269b6254e117609e3d98e37da2c647e1cea984d4c10eb34551122f" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.066265 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddf8ec2b3e269b6254e117609e3d98e37da2c647e1cea984d4c10eb34551122f"} err="failed to get container status \"ddf8ec2b3e269b6254e117609e3d98e37da2c647e1cea984d4c10eb34551122f\": rpc error: code = NotFound desc = could not find container \"ddf8ec2b3e269b6254e117609e3d98e37da2c647e1cea984d4c10eb34551122f\": container with ID starting with 
ddf8ec2b3e269b6254e117609e3d98e37da2c647e1cea984d4c10eb34551122f not found: ID does not exist" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.066288 4833 scope.go:117] "RemoveContainer" containerID="4c78bcde467e4bfb672a8b472bd6421a8730ef1c251e943d595fb98ee3098351" Feb 19 13:06:31 crc kubenswrapper[4833]: E0219 13:06:31.066507 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c78bcde467e4bfb672a8b472bd6421a8730ef1c251e943d595fb98ee3098351\": container with ID starting with 4c78bcde467e4bfb672a8b472bd6421a8730ef1c251e943d595fb98ee3098351 not found: ID does not exist" containerID="4c78bcde467e4bfb672a8b472bd6421a8730ef1c251e943d595fb98ee3098351" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.066528 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c78bcde467e4bfb672a8b472bd6421a8730ef1c251e943d595fb98ee3098351"} err="failed to get container status \"4c78bcde467e4bfb672a8b472bd6421a8730ef1c251e943d595fb98ee3098351\": rpc error: code = NotFound desc = could not find container \"4c78bcde467e4bfb672a8b472bd6421a8730ef1c251e943d595fb98ee3098351\": container with ID starting with 4c78bcde467e4bfb672a8b472bd6421a8730ef1c251e943d595fb98ee3098351 not found: ID does not exist" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.066542 4833 scope.go:117] "RemoveContainer" containerID="d1d59793814c4957cabb63065b053bfe1c4e48fd96e8ce3845dc0a67276f1fa0" Feb 19 13:06:31 crc kubenswrapper[4833]: E0219 13:06:31.066803 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1d59793814c4957cabb63065b053bfe1c4e48fd96e8ce3845dc0a67276f1fa0\": container with ID starting with d1d59793814c4957cabb63065b053bfe1c4e48fd96e8ce3845dc0a67276f1fa0 not found: ID does not exist" containerID="d1d59793814c4957cabb63065b053bfe1c4e48fd96e8ce3845dc0a67276f1fa0" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.066830 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1d59793814c4957cabb63065b053bfe1c4e48fd96e8ce3845dc0a67276f1fa0"} err="failed to get container status \"d1d59793814c4957cabb63065b053bfe1c4e48fd96e8ce3845dc0a67276f1fa0\": rpc error: code = NotFound desc = could not find container \"d1d59793814c4957cabb63065b053bfe1c4e48fd96e8ce3845dc0a67276f1fa0\": container with ID starting with d1d59793814c4957cabb63065b053bfe1c4e48fd96e8ce3845dc0a67276f1fa0 not found: ID does not exist" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.265291 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.282821 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.292764 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:06:31 crc kubenswrapper[4833]: E0219 13:06:31.293251 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7213a0a5-92b6-4b30-93e7-21e24d4c7911" containerName="sg-core" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.293269 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7213a0a5-92b6-4b30-93e7-21e24d4c7911" containerName="sg-core" Feb 19 13:06:31 crc kubenswrapper[4833]: E0219 13:06:31.293284 4833 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7213a0a5-92b6-4b30-93e7-21e24d4c7911" containerName="proxy-httpd" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.293292 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7213a0a5-92b6-4b30-93e7-21e24d4c7911" containerName="proxy-httpd" Feb 19 13:06:31 crc kubenswrapper[4833]: E0219 13:06:31.293302 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7213a0a5-92b6-4b30-93e7-21e24d4c7911" containerName="ceilometer-notification-agent" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.293312 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7213a0a5-92b6-4b30-93e7-21e24d4c7911" containerName="ceilometer-notification-agent" Feb 19 13:06:31 crc kubenswrapper[4833]: E0219 13:06:31.293330 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7213a0a5-92b6-4b30-93e7-21e24d4c7911" containerName="ceilometer-central-agent" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.293340 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="7213a0a5-92b6-4b30-93e7-21e24d4c7911" containerName="ceilometer-central-agent" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.293571 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="7213a0a5-92b6-4b30-93e7-21e24d4c7911" containerName="ceilometer-central-agent" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.293596 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="7213a0a5-92b6-4b30-93e7-21e24d4c7911" containerName="sg-core" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.293613 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="7213a0a5-92b6-4b30-93e7-21e24d4c7911" containerName="ceilometer-notification-agent" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.293625 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="7213a0a5-92b6-4b30-93e7-21e24d4c7911" containerName="proxy-httpd" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.295432 4833 util.go:30] "No sandbox for pod can be found. 
Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.301137 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.301365 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.307262 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.435676 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87138863-564a-46e7-80b0-eef5a19d7977-log-httpd\") pod \"ceilometer-0\" (UID: \"87138863-564a-46e7-80b0-eef5a19d7977\") " pod="openstack/ceilometer-0" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.435742 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87138863-564a-46e7-80b0-eef5a19d7977-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"87138863-564a-46e7-80b0-eef5a19d7977\") " pod="openstack/ceilometer-0" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.435768 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnfhv\" (UniqueName: \"kubernetes.io/projected/87138863-564a-46e7-80b0-eef5a19d7977-kube-api-access-nnfhv\") pod \"ceilometer-0\" (UID: \"87138863-564a-46e7-80b0-eef5a19d7977\") " pod="openstack/ceilometer-0" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.435799 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87138863-564a-46e7-80b0-eef5a19d7977-config-data\") pod \"ceilometer-0\" (UID: \"87138863-564a-46e7-80b0-eef5a19d7977\") " pod="openstack/ceilometer-0" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.435878 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87138863-564a-46e7-80b0-eef5a19d7977-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"87138863-564a-46e7-80b0-eef5a19d7977\") " pod="openstack/ceilometer-0" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.435904 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87138863-564a-46e7-80b0-eef5a19d7977-scripts\") pod \"ceilometer-0\" (UID: \"87138863-564a-46e7-80b0-eef5a19d7977\") " pod="openstack/ceilometer-0" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.435953 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87138863-564a-46e7-80b0-eef5a19d7977-run-httpd\") pod \"ceilometer-0\" (UID: \"87138863-564a-46e7-80b0-eef5a19d7977\") " pod="openstack/ceilometer-0" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.537839 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87138863-564a-46e7-80b0-eef5a19d7977-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"87138863-564a-46e7-80b0-eef5a19d7977\") " pod="openstack/ceilometer-0" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 
13:06:31.537928 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87138863-564a-46e7-80b0-eef5a19d7977-scripts\") pod \"ceilometer-0\" (UID: \"87138863-564a-46e7-80b0-eef5a19d7977\") " pod="openstack/ceilometer-0" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.538011 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87138863-564a-46e7-80b0-eef5a19d7977-run-httpd\") pod \"ceilometer-0\" (UID: \"87138863-564a-46e7-80b0-eef5a19d7977\") " pod="openstack/ceilometer-0" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.538152 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87138863-564a-46e7-80b0-eef5a19d7977-log-httpd\") pod \"ceilometer-0\" (UID: \"87138863-564a-46e7-80b0-eef5a19d7977\") " pod="openstack/ceilometer-0" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.538229 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87138863-564a-46e7-80b0-eef5a19d7977-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"87138863-564a-46e7-80b0-eef5a19d7977\") " pod="openstack/ceilometer-0" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.538268 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnfhv\" (UniqueName: \"kubernetes.io/projected/87138863-564a-46e7-80b0-eef5a19d7977-kube-api-access-nnfhv\") pod \"ceilometer-0\" (UID: \"87138863-564a-46e7-80b0-eef5a19d7977\") " pod="openstack/ceilometer-0" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.538324 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87138863-564a-46e7-80b0-eef5a19d7977-config-data\") pod \"ceilometer-0\" (UID: \"87138863-564a-46e7-80b0-eef5a19d7977\") " pod="openstack/ceilometer-0" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.538912 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87138863-564a-46e7-80b0-eef5a19d7977-run-httpd\") pod \"ceilometer-0\" (UID: \"87138863-564a-46e7-80b0-eef5a19d7977\") " pod="openstack/ceilometer-0" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.539484 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87138863-564a-46e7-80b0-eef5a19d7977-log-httpd\") pod \"ceilometer-0\" (UID: \"87138863-564a-46e7-80b0-eef5a19d7977\") " pod="openstack/ceilometer-0" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.543179 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87138863-564a-46e7-80b0-eef5a19d7977-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"87138863-564a-46e7-80b0-eef5a19d7977\") " pod="openstack/ceilometer-0" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.543595 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87138863-564a-46e7-80b0-eef5a19d7977-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"87138863-564a-46e7-80b0-eef5a19d7977\") " pod="openstack/ceilometer-0" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.543608 4833 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87138863-564a-46e7-80b0-eef5a19d7977-scripts\") pod \"ceilometer-0\" (UID: \"87138863-564a-46e7-80b0-eef5a19d7977\") " pod="openstack/ceilometer-0" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.544518 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87138863-564a-46e7-80b0-eef5a19d7977-config-data\") pod \"ceilometer-0\" (UID: \"87138863-564a-46e7-80b0-eef5a19d7977\") " pod="openstack/ceilometer-0" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.557314 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnfhv\" (UniqueName: \"kubernetes.io/projected/87138863-564a-46e7-80b0-eef5a19d7977-kube-api-access-nnfhv\") pod \"ceilometer-0\" (UID: \"87138863-564a-46e7-80b0-eef5a19d7977\") " pod="openstack/ceilometer-0" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.621480 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:06:31 crc kubenswrapper[4833]: I0219 13:06:31.971570 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"42dcfe39-3d5b-4e0a-8b07-658ec7f665ba","Type":"ContainerStarted","Data":"c52512af057104cdf404520dd1390a3db9a38ffbc55e7ff1a6ffeba5508e5298"} Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.056490 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.056466617 podStartE2EDuration="3.056466617s" podCreationTimestamp="2026-02-19 13:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:06:31.99482772 +0000 UTC m=+1202.390346488" watchObservedRunningTime="2026-02-19 13:06:32.056466617 +0000 UTC m=+1202.451985385" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.057078 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:06:32 crc kubenswrapper[4833]: W0219 13:06:32.057988 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87138863_564a_46e7_80b0_eef5a19d7977.slice/crio-bd6851d2ab750cda050f2847a87de5ae3627186835976cb8c5cbe19d98e15e21 WatchSource:0}: Error finding container bd6851d2ab750cda050f2847a87de5ae3627186835976cb8c5cbe19d98e15e21: Status 404 returned error can't find the container with id bd6851d2ab750cda050f2847a87de5ae3627186835976cb8c5cbe19d98e15e21 Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.187310 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-2j7hx"] Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.188751 4833 util.go:30] "No sandbox for pod can be found. 
Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.198860 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-2j7hx"] Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.252353 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9h8v\" (UniqueName: \"kubernetes.io/projected/f4d96078-4bfc-49d9-b70b-1be6d9b29558-kube-api-access-k9h8v\") pod \"nova-api-db-create-2j7hx\" (UID: \"f4d96078-4bfc-49d9-b70b-1be6d9b29558\") " pod="openstack/nova-api-db-create-2j7hx" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.252631 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4d96078-4bfc-49d9-b70b-1be6d9b29558-operator-scripts\") pod \"nova-api-db-create-2j7hx\" (UID: \"f4d96078-4bfc-49d9-b70b-1be6d9b29558\") " pod="openstack/nova-api-db-create-2j7hx" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.281587 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-slhnv"] Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.283007 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-slhnv" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.296942 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-slhnv"] Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.329429 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7213a0a5-92b6-4b30-93e7-21e24d4c7911" path="/var/lib/kubelet/pods/7213a0a5-92b6-4b30-93e7-21e24d4c7911/volumes" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.354647 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x45kr\" (UniqueName: \"kubernetes.io/projected/bdd856e6-ba47-4660-b0b1-7e6202b97bb5-kube-api-access-x45kr\") pod \"nova-cell0-db-create-slhnv\" (UID: \"bdd856e6-ba47-4660-b0b1-7e6202b97bb5\") " pod="openstack/nova-cell0-db-create-slhnv" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.354717 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4d96078-4bfc-49d9-b70b-1be6d9b29558-operator-scripts\") pod \"nova-api-db-create-2j7hx\" (UID: \"f4d96078-4bfc-49d9-b70b-1be6d9b29558\") " pod="openstack/nova-api-db-create-2j7hx" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.354761 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdd856e6-ba47-4660-b0b1-7e6202b97bb5-operator-scripts\") pod \"nova-cell0-db-create-slhnv\" (UID: \"bdd856e6-ba47-4660-b0b1-7e6202b97bb5\") " pod="openstack/nova-cell0-db-create-slhnv" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.354869 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9h8v\" (UniqueName: \"kubernetes.io/projected/f4d96078-4bfc-49d9-b70b-1be6d9b29558-kube-api-access-k9h8v\") pod \"nova-api-db-create-2j7hx\" (UID: \"f4d96078-4bfc-49d9-b70b-1be6d9b29558\") " pod="openstack/nova-api-db-create-2j7hx" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.355843 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4d96078-4bfc-49d9-b70b-1be6d9b29558-operator-scripts\") pod \"nova-api-db-create-2j7hx\" (UID: \"f4d96078-4bfc-49d9-b70b-1be6d9b29558\") " pod="openstack/nova-api-db-create-2j7hx" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.375252 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9h8v\" (UniqueName: \"kubernetes.io/projected/f4d96078-4bfc-49d9-b70b-1be6d9b29558-kube-api-access-k9h8v\") pod \"nova-api-db-create-2j7hx\" (UID: \"f4d96078-4bfc-49d9-b70b-1be6d9b29558\") " pod="openstack/nova-api-db-create-2j7hx" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.386817 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-d050-account-create-update-4w8nt"] Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.387820 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d050-account-create-update-4w8nt" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.391236 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.401058 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-d050-account-create-update-4w8nt"] Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.456428 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x45kr\" (UniqueName: \"kubernetes.io/projected/bdd856e6-ba47-4660-b0b1-7e6202b97bb5-kube-api-access-x45kr\") pod \"nova-cell0-db-create-slhnv\" (UID: \"bdd856e6-ba47-4660-b0b1-7e6202b97bb5\") " pod="openstack/nova-cell0-db-create-slhnv" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.456648 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/265af042-f6c4-4d9c-90a9-ae5305c7951e-operator-scripts\") pod \"nova-api-d050-account-create-update-4w8nt\" (UID: \"265af042-f6c4-4d9c-90a9-ae5305c7951e\") " pod="openstack/nova-api-d050-account-create-update-4w8nt" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.456766 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdd856e6-ba47-4660-b0b1-7e6202b97bb5-operator-scripts\") pod \"nova-cell0-db-create-slhnv\" (UID: \"bdd856e6-ba47-4660-b0b1-7e6202b97bb5\") " pod="openstack/nova-cell0-db-create-slhnv" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.457606 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzghx\" (UniqueName: \"kubernetes.io/projected/265af042-f6c4-4d9c-90a9-ae5305c7951e-kube-api-access-rzghx\") pod \"nova-api-d050-account-create-update-4w8nt\" (UID: \"265af042-f6c4-4d9c-90a9-ae5305c7951e\") " pod="openstack/nova-api-d050-account-create-update-4w8nt" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.457441 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdd856e6-ba47-4660-b0b1-7e6202b97bb5-operator-scripts\") pod \"nova-cell0-db-create-slhnv\" (UID: \"bdd856e6-ba47-4660-b0b1-7e6202b97bb5\") " pod="openstack/nova-cell0-db-create-slhnv" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.487885 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-x45kr\" (UniqueName: \"kubernetes.io/projected/bdd856e6-ba47-4660-b0b1-7e6202b97bb5-kube-api-access-x45kr\") pod \"nova-cell0-db-create-slhnv\" (UID: \"bdd856e6-ba47-4660-b0b1-7e6202b97bb5\") " pod="openstack/nova-cell0-db-create-slhnv" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.496207 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-bqvdg"] Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.497310 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bqvdg" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.507483 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-bqvdg"] Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.508889 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2j7hx" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.561372 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dbf3254-d4fb-44d8-9f9e-637d9351d696-operator-scripts\") pod \"nova-cell1-db-create-bqvdg\" (UID: \"6dbf3254-d4fb-44d8-9f9e-637d9351d696\") " pod="openstack/nova-cell1-db-create-bqvdg" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.561706 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/265af042-f6c4-4d9c-90a9-ae5305c7951e-operator-scripts\") pod \"nova-api-d050-account-create-update-4w8nt\" (UID: \"265af042-f6c4-4d9c-90a9-ae5305c7951e\") " pod="openstack/nova-api-d050-account-create-update-4w8nt" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.561966 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzghx\" (UniqueName: \"kubernetes.io/projected/265af042-f6c4-4d9c-90a9-ae5305c7951e-kube-api-access-rzghx\") pod \"nova-api-d050-account-create-update-4w8nt\" (UID: \"265af042-f6c4-4d9c-90a9-ae5305c7951e\") " pod="openstack/nova-api-d050-account-create-update-4w8nt" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.562048 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2dmg\" (UniqueName: \"kubernetes.io/projected/6dbf3254-d4fb-44d8-9f9e-637d9351d696-kube-api-access-q2dmg\") pod \"nova-cell1-db-create-bqvdg\" (UID: \"6dbf3254-d4fb-44d8-9f9e-637d9351d696\") " pod="openstack/nova-cell1-db-create-bqvdg" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.562975 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/265af042-f6c4-4d9c-90a9-ae5305c7951e-operator-scripts\") pod \"nova-api-d050-account-create-update-4w8nt\" (UID: \"265af042-f6c4-4d9c-90a9-ae5305c7951e\") " pod="openstack/nova-api-d050-account-create-update-4w8nt" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.584478 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzghx\" (UniqueName: \"kubernetes.io/projected/265af042-f6c4-4d9c-90a9-ae5305c7951e-kube-api-access-rzghx\") pod \"nova-api-d050-account-create-update-4w8nt\" (UID: \"265af042-f6c4-4d9c-90a9-ae5305c7951e\") " pod="openstack/nova-api-d050-account-create-update-4w8nt" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.588861 4833 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-api-d050-account-create-update-4w8nt" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.591271 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-56bb-account-create-update-45wwp"] Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.592322 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-56bb-account-create-update-45wwp" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.593972 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.602261 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-slhnv" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.606626 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-56bb-account-create-update-45wwp"] Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.666434 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2g2m\" (UniqueName: \"kubernetes.io/projected/872972e7-f010-4500-923f-9d29cf00bb60-kube-api-access-p2g2m\") pod \"nova-cell0-56bb-account-create-update-45wwp\" (UID: \"872972e7-f010-4500-923f-9d29cf00bb60\") " pod="openstack/nova-cell0-56bb-account-create-update-45wwp" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.667484 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2dmg\" (UniqueName: \"kubernetes.io/projected/6dbf3254-d4fb-44d8-9f9e-637d9351d696-kube-api-access-q2dmg\") pod \"nova-cell1-db-create-bqvdg\" (UID: \"6dbf3254-d4fb-44d8-9f9e-637d9351d696\") " pod="openstack/nova-cell1-db-create-bqvdg" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.667584 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/872972e7-f010-4500-923f-9d29cf00bb60-operator-scripts\") pod \"nova-cell0-56bb-account-create-update-45wwp\" (UID: \"872972e7-f010-4500-923f-9d29cf00bb60\") " pod="openstack/nova-cell0-56bb-account-create-update-45wwp" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.667687 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dbf3254-d4fb-44d8-9f9e-637d9351d696-operator-scripts\") pod \"nova-cell1-db-create-bqvdg\" (UID: \"6dbf3254-d4fb-44d8-9f9e-637d9351d696\") " pod="openstack/nova-cell1-db-create-bqvdg" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.668790 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dbf3254-d4fb-44d8-9f9e-637d9351d696-operator-scripts\") pod \"nova-cell1-db-create-bqvdg\" (UID: \"6dbf3254-d4fb-44d8-9f9e-637d9351d696\") " pod="openstack/nova-cell1-db-create-bqvdg" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.690027 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2dmg\" (UniqueName: \"kubernetes.io/projected/6dbf3254-d4fb-44d8-9f9e-637d9351d696-kube-api-access-q2dmg\") pod \"nova-cell1-db-create-bqvdg\" (UID: \"6dbf3254-d4fb-44d8-9f9e-637d9351d696\") " pod="openstack/nova-cell1-db-create-bqvdg" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.770605 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/872972e7-f010-4500-923f-9d29cf00bb60-operator-scripts\") pod \"nova-cell0-56bb-account-create-update-45wwp\" (UID: \"872972e7-f010-4500-923f-9d29cf00bb60\") " pod="openstack/nova-cell0-56bb-account-create-update-45wwp" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.770710 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2g2m\" (UniqueName: \"kubernetes.io/projected/872972e7-f010-4500-923f-9d29cf00bb60-kube-api-access-p2g2m\") pod \"nova-cell0-56bb-account-create-update-45wwp\" (UID: \"872972e7-f010-4500-923f-9d29cf00bb60\") " pod="openstack/nova-cell0-56bb-account-create-update-45wwp" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.771660 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/872972e7-f010-4500-923f-9d29cf00bb60-operator-scripts\") pod \"nova-cell0-56bb-account-create-update-45wwp\" (UID: \"872972e7-f010-4500-923f-9d29cf00bb60\") " pod="openstack/nova-cell0-56bb-account-create-update-45wwp" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.792365 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-7774-account-create-update-dcgfl"] Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.793616 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7774-account-create-update-dcgfl" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.793954 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2g2m\" (UniqueName: \"kubernetes.io/projected/872972e7-f010-4500-923f-9d29cf00bb60-kube-api-access-p2g2m\") pod \"nova-cell0-56bb-account-create-update-45wwp\" (UID: \"872972e7-f010-4500-923f-9d29cf00bb60\") " pod="openstack/nova-cell0-56bb-account-create-update-45wwp" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.798171 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.806249 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7774-account-create-update-dcgfl"] Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.872398 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l26dc\" (UniqueName: \"kubernetes.io/projected/f3bce0ce-1de6-41b9-b947-d6deb44c40a7-kube-api-access-l26dc\") pod \"nova-cell1-7774-account-create-update-dcgfl\" (UID: \"f3bce0ce-1de6-41b9-b947-d6deb44c40a7\") " pod="openstack/nova-cell1-7774-account-create-update-dcgfl" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.872527 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3bce0ce-1de6-41b9-b947-d6deb44c40a7-operator-scripts\") pod \"nova-cell1-7774-account-create-update-dcgfl\" (UID: \"f3bce0ce-1de6-41b9-b947-d6deb44c40a7\") " pod="openstack/nova-cell1-7774-account-create-update-dcgfl" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.923247 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bqvdg" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.937592 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-56bb-account-create-update-45wwp" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.974636 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l26dc\" (UniqueName: \"kubernetes.io/projected/f3bce0ce-1de6-41b9-b947-d6deb44c40a7-kube-api-access-l26dc\") pod \"nova-cell1-7774-account-create-update-dcgfl\" (UID: \"f3bce0ce-1de6-41b9-b947-d6deb44c40a7\") " pod="openstack/nova-cell1-7774-account-create-update-dcgfl" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.974736 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3bce0ce-1de6-41b9-b947-d6deb44c40a7-operator-scripts\") pod \"nova-cell1-7774-account-create-update-dcgfl\" (UID: \"f3bce0ce-1de6-41b9-b947-d6deb44c40a7\") " pod="openstack/nova-cell1-7774-account-create-update-dcgfl" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.975611 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3bce0ce-1de6-41b9-b947-d6deb44c40a7-operator-scripts\") pod \"nova-cell1-7774-account-create-update-dcgfl\" (UID: \"f3bce0ce-1de6-41b9-b947-d6deb44c40a7\") " pod="openstack/nova-cell1-7774-account-create-update-dcgfl" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.990465 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l26dc\" (UniqueName: \"kubernetes.io/projected/f3bce0ce-1de6-41b9-b947-d6deb44c40a7-kube-api-access-l26dc\") pod \"nova-cell1-7774-account-create-update-dcgfl\" (UID: \"f3bce0ce-1de6-41b9-b947-d6deb44c40a7\") " pod="openstack/nova-cell1-7774-account-create-update-dcgfl" Feb 19 13:06:32 crc kubenswrapper[4833]: I0219 13:06:32.992030 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-2j7hx"] Feb 19 13:06:33 crc kubenswrapper[4833]: I0219 13:06:33.007259 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87138863-564a-46e7-80b0-eef5a19d7977","Type":"ContainerStarted","Data":"dcf54e4cdc78b40e62c3ae3515d072a6e94994455313f28cd6a6019a72bd8d7a"} Feb 19 13:06:33 crc kubenswrapper[4833]: I0219 13:06:33.007325 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87138863-564a-46e7-80b0-eef5a19d7977","Type":"ContainerStarted","Data":"bd6851d2ab750cda050f2847a87de5ae3627186835976cb8c5cbe19d98e15e21"} Feb 19 13:06:33 crc kubenswrapper[4833]: W0219 13:06:33.013712 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4d96078_4bfc_49d9_b70b_1be6d9b29558.slice/crio-378f8715cb3097f94aca827f76fa5c3b9516444a5d94c3c0e1a0bffee70b9101 WatchSource:0}: Error finding container 378f8715cb3097f94aca827f76fa5c3b9516444a5d94c3c0e1a0bffee70b9101: Status 404 returned error can't find the container with id 378f8715cb3097f94aca827f76fa5c3b9516444a5d94c3c0e1a0bffee70b9101 Feb 19 13:06:33 crc kubenswrapper[4833]: I0219 13:06:33.114345 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-7774-account-create-update-dcgfl" Feb 19 13:06:33 crc kubenswrapper[4833]: W0219 13:06:33.160532 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdd856e6_ba47_4660_b0b1_7e6202b97bb5.slice/crio-e4f6f4b37293ece634a1dddf61095f7f0e22a531cb12eab1e326c761d661a62e WatchSource:0}: Error finding container e4f6f4b37293ece634a1dddf61095f7f0e22a531cb12eab1e326c761d661a62e: Status 404 returned error can't find the container with id e4f6f4b37293ece634a1dddf61095f7f0e22a531cb12eab1e326c761d661a62e Feb 19 13:06:33 crc kubenswrapper[4833]: I0219 13:06:33.162090 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-slhnv"] Feb 19 13:06:33 crc kubenswrapper[4833]: W0219 13:06:33.167911 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod265af042_f6c4_4d9c_90a9_ae5305c7951e.slice/crio-68662925134e45e2a4e859be1f0278bcfce93a802ff4d696ed23e22bf6e0dc3b WatchSource:0}: Error finding container 68662925134e45e2a4e859be1f0278bcfce93a802ff4d696ed23e22bf6e0dc3b: Status 404 returned error can't find the container with id 68662925134e45e2a4e859be1f0278bcfce93a802ff4d696ed23e22bf6e0dc3b Feb 19 13:06:33 crc kubenswrapper[4833]: I0219 13:06:33.188015 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-d050-account-create-update-4w8nt"] Feb 19 13:06:33 crc kubenswrapper[4833]: I0219 13:06:33.488890 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-56bb-account-create-update-45wwp"] Feb 19 13:06:33 crc kubenswrapper[4833]: I0219 13:06:33.671970 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-bqvdg"] Feb 19 13:06:33 crc kubenswrapper[4833]: W0219 13:06:33.681668 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6dbf3254_d4fb_44d8_9f9e_637d9351d696.slice/crio-2eef1923f7bd2b514e89e6096ac54b1bb0b702a9a65edba4a4b961ce17f5716c WatchSource:0}: Error finding container 2eef1923f7bd2b514e89e6096ac54b1bb0b702a9a65edba4a4b961ce17f5716c: Status 404 returned error can't find the container with id 2eef1923f7bd2b514e89e6096ac54b1bb0b702a9a65edba4a4b961ce17f5716c Feb 19 13:06:33 crc kubenswrapper[4833]: W0219 13:06:33.689246 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3bce0ce_1de6_41b9_b947_d6deb44c40a7.slice/crio-46838350b336da62dc1e3fb519bf2084481de50ef63ffd58c1895ff11ccb941f WatchSource:0}: Error finding container 46838350b336da62dc1e3fb519bf2084481de50ef63ffd58c1895ff11ccb941f: Status 404 returned error can't find the container with id 46838350b336da62dc1e3fb519bf2084481de50ef63ffd58c1895ff11ccb941f Feb 19 13:06:33 crc kubenswrapper[4833]: I0219 13:06:33.689823 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7774-account-create-update-dcgfl"] Feb 19 13:06:34 crc kubenswrapper[4833]: I0219 13:06:34.023954 4833 generic.go:334] "Generic (PLEG): container finished" podID="bdd856e6-ba47-4660-b0b1-7e6202b97bb5" containerID="2ef9069ddf43a8ca727bcd59b0fcb6abf87ee4beb694fe4a4aefa6188f172871" exitCode=0 Feb 19 13:06:34 crc kubenswrapper[4833]: I0219 13:06:34.024038 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-slhnv" 
event={"ID":"bdd856e6-ba47-4660-b0b1-7e6202b97bb5","Type":"ContainerDied","Data":"2ef9069ddf43a8ca727bcd59b0fcb6abf87ee4beb694fe4a4aefa6188f172871"} Feb 19 13:06:34 crc kubenswrapper[4833]: I0219 13:06:34.024359 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-slhnv" event={"ID":"bdd856e6-ba47-4660-b0b1-7e6202b97bb5","Type":"ContainerStarted","Data":"e4f6f4b37293ece634a1dddf61095f7f0e22a531cb12eab1e326c761d661a62e"} Feb 19 13:06:34 crc kubenswrapper[4833]: I0219 13:06:34.033415 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7774-account-create-update-dcgfl" event={"ID":"f3bce0ce-1de6-41b9-b947-d6deb44c40a7","Type":"ContainerStarted","Data":"4a9c1d875d5757ee368a993a2ad717f9541c2f1fd000dab4586d7f4ae124c82e"} Feb 19 13:06:34 crc kubenswrapper[4833]: I0219 13:06:34.033469 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7774-account-create-update-dcgfl" event={"ID":"f3bce0ce-1de6-41b9-b947-d6deb44c40a7","Type":"ContainerStarted","Data":"46838350b336da62dc1e3fb519bf2084481de50ef63ffd58c1895ff11ccb941f"} Feb 19 13:06:34 crc kubenswrapper[4833]: I0219 13:06:34.036198 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bqvdg" event={"ID":"6dbf3254-d4fb-44d8-9f9e-637d9351d696","Type":"ContainerStarted","Data":"2eef1923f7bd2b514e89e6096ac54b1bb0b702a9a65edba4a4b961ce17f5716c"} Feb 19 13:06:34 crc kubenswrapper[4833]: I0219 13:06:34.038195 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-56bb-account-create-update-45wwp" event={"ID":"872972e7-f010-4500-923f-9d29cf00bb60","Type":"ContainerStarted","Data":"b3387933afa0029cb71e5bfcbac3c59474e858313143df2f87dd6c8c275ef6e8"} Feb 19 13:06:34 crc kubenswrapper[4833]: I0219 13:06:34.038219 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-56bb-account-create-update-45wwp" event={"ID":"872972e7-f010-4500-923f-9d29cf00bb60","Type":"ContainerStarted","Data":"6222cc50563c5c50eaff4d368d966f5002a3d6420f678e09d5bf5b25a9df661d"} Feb 19 13:06:34 crc kubenswrapper[4833]: I0219 13:06:34.055420 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-56bb-account-create-update-45wwp" podStartSLOduration=2.055398517 podStartE2EDuration="2.055398517s" podCreationTimestamp="2026-02-19 13:06:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:06:34.051568254 +0000 UTC m=+1204.447087032" watchObservedRunningTime="2026-02-19 13:06:34.055398517 +0000 UTC m=+1204.450917285" Feb 19 13:06:34 crc kubenswrapper[4833]: I0219 13:06:34.056614 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87138863-564a-46e7-80b0-eef5a19d7977","Type":"ContainerStarted","Data":"1456a8661612e3fa4a5e6ab44301c2445c0b0d1bcedfc37c4f5a3a7ac3372e30"} Feb 19 13:06:34 crc kubenswrapper[4833]: I0219 13:06:34.069908 4833 generic.go:334] "Generic (PLEG): container finished" podID="f4d96078-4bfc-49d9-b70b-1be6d9b29558" containerID="58856ade3678574f8e42f2010360981657983c98bdc8707d02e40263e1a9acf6" exitCode=0 Feb 19 13:06:34 crc kubenswrapper[4833]: I0219 13:06:34.070047 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2j7hx" event={"ID":"f4d96078-4bfc-49d9-b70b-1be6d9b29558","Type":"ContainerDied","Data":"58856ade3678574f8e42f2010360981657983c98bdc8707d02e40263e1a9acf6"} Feb 19 
13:06:34 crc kubenswrapper[4833]: I0219 13:06:34.070081 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2j7hx" event={"ID":"f4d96078-4bfc-49d9-b70b-1be6d9b29558","Type":"ContainerStarted","Data":"378f8715cb3097f94aca827f76fa5c3b9516444a5d94c3c0e1a0bffee70b9101"} Feb 19 13:06:34 crc kubenswrapper[4833]: I0219 13:06:34.076538 4833 generic.go:334] "Generic (PLEG): container finished" podID="265af042-f6c4-4d9c-90a9-ae5305c7951e" containerID="307630457cf72a3fd20dd4332c9dbd2eb622a546daacc4dd70f9716d4e4eb781" exitCode=0 Feb 19 13:06:34 crc kubenswrapper[4833]: I0219 13:06:34.076594 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d050-account-create-update-4w8nt" event={"ID":"265af042-f6c4-4d9c-90a9-ae5305c7951e","Type":"ContainerDied","Data":"307630457cf72a3fd20dd4332c9dbd2eb622a546daacc4dd70f9716d4e4eb781"} Feb 19 13:06:34 crc kubenswrapper[4833]: I0219 13:06:34.076627 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d050-account-create-update-4w8nt" event={"ID":"265af042-f6c4-4d9c-90a9-ae5305c7951e","Type":"ContainerStarted","Data":"68662925134e45e2a4e859be1f0278bcfce93a802ff4d696ed23e22bf6e0dc3b"} Feb 19 13:06:34 crc kubenswrapper[4833]: I0219 13:06:34.089162 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-bqvdg" podStartSLOduration=2.089138714 podStartE2EDuration="2.089138714s" podCreationTimestamp="2026-02-19 13:06:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:06:34.072794574 +0000 UTC m=+1204.468313342" watchObservedRunningTime="2026-02-19 13:06:34.089138714 +0000 UTC m=+1204.484657482" Feb 19 13:06:34 crc kubenswrapper[4833]: I0219 13:06:34.117796 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-7774-account-create-update-dcgfl" podStartSLOduration=2.117778393 podStartE2EDuration="2.117778393s" podCreationTimestamp="2026-02-19 13:06:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:06:34.087216522 +0000 UTC m=+1204.482735290" watchObservedRunningTime="2026-02-19 13:06:34.117778393 +0000 UTC m=+1204.513297161" Feb 19 13:06:34 crc kubenswrapper[4833]: I0219 13:06:34.342862 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 13:06:34 crc kubenswrapper[4833]: I0219 13:06:34.342900 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 19 13:06:34 crc kubenswrapper[4833]: I0219 13:06:34.381231 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 13:06:34 crc kubenswrapper[4833]: I0219 13:06:34.416058 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 19 13:06:35 crc kubenswrapper[4833]: I0219 13:06:35.085284 4833 generic.go:334] "Generic (PLEG): container finished" podID="f3bce0ce-1de6-41b9-b947-d6deb44c40a7" containerID="4a9c1d875d5757ee368a993a2ad717f9541c2f1fd000dab4586d7f4ae124c82e" exitCode=0 Feb 19 13:06:35 crc kubenswrapper[4833]: I0219 13:06:35.085344 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7774-account-create-update-dcgfl" 
event={"ID":"f3bce0ce-1de6-41b9-b947-d6deb44c40a7","Type":"ContainerDied","Data":"4a9c1d875d5757ee368a993a2ad717f9541c2f1fd000dab4586d7f4ae124c82e"} Feb 19 13:06:35 crc kubenswrapper[4833]: I0219 13:06:35.087546 4833 generic.go:334] "Generic (PLEG): container finished" podID="6dbf3254-d4fb-44d8-9f9e-637d9351d696" containerID="69f25c28e08dfd5dda237d356bd4c2e14220158d36acc0d06b0ad42a7909c43a" exitCode=0 Feb 19 13:06:35 crc kubenswrapper[4833]: I0219 13:06:35.087589 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bqvdg" event={"ID":"6dbf3254-d4fb-44d8-9f9e-637d9351d696","Type":"ContainerDied","Data":"69f25c28e08dfd5dda237d356bd4c2e14220158d36acc0d06b0ad42a7909c43a"} Feb 19 13:06:35 crc kubenswrapper[4833]: I0219 13:06:35.091465 4833 generic.go:334] "Generic (PLEG): container finished" podID="872972e7-f010-4500-923f-9d29cf00bb60" containerID="b3387933afa0029cb71e5bfcbac3c59474e858313143df2f87dd6c8c275ef6e8" exitCode=0 Feb 19 13:06:35 crc kubenswrapper[4833]: I0219 13:06:35.091552 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-56bb-account-create-update-45wwp" event={"ID":"872972e7-f010-4500-923f-9d29cf00bb60","Type":"ContainerDied","Data":"b3387933afa0029cb71e5bfcbac3c59474e858313143df2f87dd6c8c275ef6e8"} Feb 19 13:06:35 crc kubenswrapper[4833]: I0219 13:06:35.094789 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87138863-564a-46e7-80b0-eef5a19d7977","Type":"ContainerStarted","Data":"f334ac772d67a4815bfbfc4911bb5aca62ed2ca22349f15088bea6f7c7dd2fd8"} Feb 19 13:06:35 crc kubenswrapper[4833]: I0219 13:06:35.095100 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 13:06:35 crc kubenswrapper[4833]: I0219 13:06:35.095147 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 19 13:06:35 crc kubenswrapper[4833]: I0219 13:06:35.593942 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d050-account-create-update-4w8nt" Feb 19 13:06:35 crc kubenswrapper[4833]: I0219 13:06:35.676570 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/265af042-f6c4-4d9c-90a9-ae5305c7951e-operator-scripts\") pod \"265af042-f6c4-4d9c-90a9-ae5305c7951e\" (UID: \"265af042-f6c4-4d9c-90a9-ae5305c7951e\") " Feb 19 13:06:35 crc kubenswrapper[4833]: I0219 13:06:35.676660 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzghx\" (UniqueName: \"kubernetes.io/projected/265af042-f6c4-4d9c-90a9-ae5305c7951e-kube-api-access-rzghx\") pod \"265af042-f6c4-4d9c-90a9-ae5305c7951e\" (UID: \"265af042-f6c4-4d9c-90a9-ae5305c7951e\") " Feb 19 13:06:35 crc kubenswrapper[4833]: I0219 13:06:35.677285 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/265af042-f6c4-4d9c-90a9-ae5305c7951e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "265af042-f6c4-4d9c-90a9-ae5305c7951e" (UID: "265af042-f6c4-4d9c-90a9-ae5305c7951e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:06:35 crc kubenswrapper[4833]: I0219 13:06:35.683919 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/265af042-f6c4-4d9c-90a9-ae5305c7951e-kube-api-access-rzghx" (OuterVolumeSpecName: "kube-api-access-rzghx") pod "265af042-f6c4-4d9c-90a9-ae5305c7951e" (UID: "265af042-f6c4-4d9c-90a9-ae5305c7951e"). InnerVolumeSpecName "kube-api-access-rzghx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:06:35 crc kubenswrapper[4833]: I0219 13:06:35.686272 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2j7hx" Feb 19 13:06:35 crc kubenswrapper[4833]: I0219 13:06:35.724469 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-slhnv" Feb 19 13:06:35 crc kubenswrapper[4833]: I0219 13:06:35.778521 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x45kr\" (UniqueName: \"kubernetes.io/projected/bdd856e6-ba47-4660-b0b1-7e6202b97bb5-kube-api-access-x45kr\") pod \"bdd856e6-ba47-4660-b0b1-7e6202b97bb5\" (UID: \"bdd856e6-ba47-4660-b0b1-7e6202b97bb5\") " Feb 19 13:06:35 crc kubenswrapper[4833]: I0219 13:06:35.778574 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4d96078-4bfc-49d9-b70b-1be6d9b29558-operator-scripts\") pod \"f4d96078-4bfc-49d9-b70b-1be6d9b29558\" (UID: \"f4d96078-4bfc-49d9-b70b-1be6d9b29558\") " Feb 19 13:06:35 crc kubenswrapper[4833]: I0219 13:06:35.778619 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9h8v\" (UniqueName: \"kubernetes.io/projected/f4d96078-4bfc-49d9-b70b-1be6d9b29558-kube-api-access-k9h8v\") pod \"f4d96078-4bfc-49d9-b70b-1be6d9b29558\" (UID: \"f4d96078-4bfc-49d9-b70b-1be6d9b29558\") " Feb 19 13:06:35 crc kubenswrapper[4833]: I0219 13:06:35.778743 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdd856e6-ba47-4660-b0b1-7e6202b97bb5-operator-scripts\") pod \"bdd856e6-ba47-4660-b0b1-7e6202b97bb5\" (UID: \"bdd856e6-ba47-4660-b0b1-7e6202b97bb5\") " Feb 19 13:06:35 crc kubenswrapper[4833]: I0219 13:06:35.779432 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4d96078-4bfc-49d9-b70b-1be6d9b29558-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f4d96078-4bfc-49d9-b70b-1be6d9b29558" (UID: "f4d96078-4bfc-49d9-b70b-1be6d9b29558"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:06:35 crc kubenswrapper[4833]: I0219 13:06:35.779510 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdd856e6-ba47-4660-b0b1-7e6202b97bb5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bdd856e6-ba47-4660-b0b1-7e6202b97bb5" (UID: "bdd856e6-ba47-4660-b0b1-7e6202b97bb5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:06:35 crc kubenswrapper[4833]: I0219 13:06:35.779701 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/265af042-f6c4-4d9c-90a9-ae5305c7951e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:35 crc kubenswrapper[4833]: I0219 13:06:35.779755 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzghx\" (UniqueName: \"kubernetes.io/projected/265af042-f6c4-4d9c-90a9-ae5305c7951e-kube-api-access-rzghx\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:35 crc kubenswrapper[4833]: I0219 13:06:35.779801 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4d96078-4bfc-49d9-b70b-1be6d9b29558-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:35 crc kubenswrapper[4833]: I0219 13:06:35.782269 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4d96078-4bfc-49d9-b70b-1be6d9b29558-kube-api-access-k9h8v" (OuterVolumeSpecName: "kube-api-access-k9h8v") pod "f4d96078-4bfc-49d9-b70b-1be6d9b29558" (UID: "f4d96078-4bfc-49d9-b70b-1be6d9b29558"). InnerVolumeSpecName "kube-api-access-k9h8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:06:35 crc kubenswrapper[4833]: I0219 13:06:35.782654 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdd856e6-ba47-4660-b0b1-7e6202b97bb5-kube-api-access-x45kr" (OuterVolumeSpecName: "kube-api-access-x45kr") pod "bdd856e6-ba47-4660-b0b1-7e6202b97bb5" (UID: "bdd856e6-ba47-4660-b0b1-7e6202b97bb5"). InnerVolumeSpecName "kube-api-access-x45kr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:06:35 crc kubenswrapper[4833]: I0219 13:06:35.881364 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x45kr\" (UniqueName: \"kubernetes.io/projected/bdd856e6-ba47-4660-b0b1-7e6202b97bb5-kube-api-access-x45kr\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:35 crc kubenswrapper[4833]: I0219 13:06:35.881391 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9h8v\" (UniqueName: \"kubernetes.io/projected/f4d96078-4bfc-49d9-b70b-1be6d9b29558-kube-api-access-k9h8v\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:35 crc kubenswrapper[4833]: I0219 13:06:35.881401 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdd856e6-ba47-4660-b0b1-7e6202b97bb5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:36 crc kubenswrapper[4833]: I0219 13:06:36.104971 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-d050-account-create-update-4w8nt" Feb 19 13:06:36 crc kubenswrapper[4833]: I0219 13:06:36.104972 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d050-account-create-update-4w8nt" event={"ID":"265af042-f6c4-4d9c-90a9-ae5305c7951e","Type":"ContainerDied","Data":"68662925134e45e2a4e859be1f0278bcfce93a802ff4d696ed23e22bf6e0dc3b"} Feb 19 13:06:36 crc kubenswrapper[4833]: I0219 13:06:36.105102 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68662925134e45e2a4e859be1f0278bcfce93a802ff4d696ed23e22bf6e0dc3b" Feb 19 13:06:36 crc kubenswrapper[4833]: I0219 13:06:36.107329 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-slhnv" event={"ID":"bdd856e6-ba47-4660-b0b1-7e6202b97bb5","Type":"ContainerDied","Data":"e4f6f4b37293ece634a1dddf61095f7f0e22a531cb12eab1e326c761d661a62e"} Feb 19 13:06:36 crc kubenswrapper[4833]: I0219 13:06:36.107363 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4f6f4b37293ece634a1dddf61095f7f0e22a531cb12eab1e326c761d661a62e" Feb 19 13:06:36 crc kubenswrapper[4833]: I0219 13:06:36.107365 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-slhnv" Feb 19 13:06:36 crc kubenswrapper[4833]: I0219 13:06:36.109692 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87138863-564a-46e7-80b0-eef5a19d7977","Type":"ContainerStarted","Data":"0d73f3ffe69a520b27305f6cce65bedcd598f033c276f9c1caf76a9e06e67868"} Feb 19 13:06:36 crc kubenswrapper[4833]: I0219 13:06:36.109828 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 13:06:36 crc kubenswrapper[4833]: I0219 13:06:36.111262 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2j7hx" event={"ID":"f4d96078-4bfc-49d9-b70b-1be6d9b29558","Type":"ContainerDied","Data":"378f8715cb3097f94aca827f76fa5c3b9516444a5d94c3c0e1a0bffee70b9101"} Feb 19 13:06:36 crc kubenswrapper[4833]: I0219 13:06:36.111294 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="378f8715cb3097f94aca827f76fa5c3b9516444a5d94c3c0e1a0bffee70b9101" Feb 19 13:06:36 crc kubenswrapper[4833]: I0219 13:06:36.111388 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2j7hx" Feb 19 13:06:36 crc kubenswrapper[4833]: I0219 13:06:36.355205 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-7774-account-create-update-dcgfl" Feb 19 13:06:36 crc kubenswrapper[4833]: I0219 13:06:36.375455 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.5332094120000002 podStartE2EDuration="5.375438829s" podCreationTimestamp="2026-02-19 13:06:31 +0000 UTC" firstStartedPulling="2026-02-19 13:06:32.060172337 +0000 UTC m=+1202.455691105" lastFinishedPulling="2026-02-19 13:06:35.902401754 +0000 UTC m=+1206.297920522" observedRunningTime="2026-02-19 13:06:36.139055575 +0000 UTC m=+1206.534574353" watchObservedRunningTime="2026-02-19 13:06:36.375438829 +0000 UTC m=+1206.770957597" Feb 19 13:06:36 crc kubenswrapper[4833]: I0219 13:06:36.390798 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l26dc\" (UniqueName: \"kubernetes.io/projected/f3bce0ce-1de6-41b9-b947-d6deb44c40a7-kube-api-access-l26dc\") pod \"f3bce0ce-1de6-41b9-b947-d6deb44c40a7\" (UID: \"f3bce0ce-1de6-41b9-b947-d6deb44c40a7\") " Feb 19 13:06:36 crc kubenswrapper[4833]: I0219 13:06:36.390885 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3bce0ce-1de6-41b9-b947-d6deb44c40a7-operator-scripts\") pod \"f3bce0ce-1de6-41b9-b947-d6deb44c40a7\" (UID: \"f3bce0ce-1de6-41b9-b947-d6deb44c40a7\") " Feb 19 13:06:36 crc kubenswrapper[4833]: I0219 13:06:36.391772 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3bce0ce-1de6-41b9-b947-d6deb44c40a7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f3bce0ce-1de6-41b9-b947-d6deb44c40a7" (UID: "f3bce0ce-1de6-41b9-b947-d6deb44c40a7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:06:36 crc kubenswrapper[4833]: I0219 13:06:36.394754 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3bce0ce-1de6-41b9-b947-d6deb44c40a7-kube-api-access-l26dc" (OuterVolumeSpecName: "kube-api-access-l26dc") pod "f3bce0ce-1de6-41b9-b947-d6deb44c40a7" (UID: "f3bce0ce-1de6-41b9-b947-d6deb44c40a7"). InnerVolumeSpecName "kube-api-access-l26dc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:06:36 crc kubenswrapper[4833]: I0219 13:06:36.493082 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l26dc\" (UniqueName: \"kubernetes.io/projected/f3bce0ce-1de6-41b9-b947-d6deb44c40a7-kube-api-access-l26dc\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:36 crc kubenswrapper[4833]: I0219 13:06:36.493148 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3bce0ce-1de6-41b9-b947-d6deb44c40a7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:36 crc kubenswrapper[4833]: I0219 13:06:36.553486 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-56bb-account-create-update-45wwp" Feb 19 13:06:36 crc kubenswrapper[4833]: I0219 13:06:36.594710 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/872972e7-f010-4500-923f-9d29cf00bb60-operator-scripts\") pod \"872972e7-f010-4500-923f-9d29cf00bb60\" (UID: \"872972e7-f010-4500-923f-9d29cf00bb60\") " Feb 19 13:06:36 crc kubenswrapper[4833]: I0219 13:06:36.594885 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2g2m\" (UniqueName: \"kubernetes.io/projected/872972e7-f010-4500-923f-9d29cf00bb60-kube-api-access-p2g2m\") pod \"872972e7-f010-4500-923f-9d29cf00bb60\" (UID: \"872972e7-f010-4500-923f-9d29cf00bb60\") " Feb 19 13:06:36 crc kubenswrapper[4833]: I0219 13:06:36.595326 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/872972e7-f010-4500-923f-9d29cf00bb60-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "872972e7-f010-4500-923f-9d29cf00bb60" (UID: "872972e7-f010-4500-923f-9d29cf00bb60"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:06:36 crc kubenswrapper[4833]: I0219 13:06:36.595638 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/872972e7-f010-4500-923f-9d29cf00bb60-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:36 crc kubenswrapper[4833]: I0219 13:06:36.602255 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/872972e7-f010-4500-923f-9d29cf00bb60-kube-api-access-p2g2m" (OuterVolumeSpecName: "kube-api-access-p2g2m") pod "872972e7-f010-4500-923f-9d29cf00bb60" (UID: "872972e7-f010-4500-923f-9d29cf00bb60"). InnerVolumeSpecName "kube-api-access-p2g2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:06:36 crc kubenswrapper[4833]: I0219 13:06:36.661915 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-bqvdg" Feb 19 13:06:36 crc kubenswrapper[4833]: I0219 13:06:36.696718 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2dmg\" (UniqueName: \"kubernetes.io/projected/6dbf3254-d4fb-44d8-9f9e-637d9351d696-kube-api-access-q2dmg\") pod \"6dbf3254-d4fb-44d8-9f9e-637d9351d696\" (UID: \"6dbf3254-d4fb-44d8-9f9e-637d9351d696\") " Feb 19 13:06:36 crc kubenswrapper[4833]: I0219 13:06:36.696777 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dbf3254-d4fb-44d8-9f9e-637d9351d696-operator-scripts\") pod \"6dbf3254-d4fb-44d8-9f9e-637d9351d696\" (UID: \"6dbf3254-d4fb-44d8-9f9e-637d9351d696\") " Feb 19 13:06:36 crc kubenswrapper[4833]: I0219 13:06:36.697254 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6dbf3254-d4fb-44d8-9f9e-637d9351d696-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6dbf3254-d4fb-44d8-9f9e-637d9351d696" (UID: "6dbf3254-d4fb-44d8-9f9e-637d9351d696"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:06:36 crc kubenswrapper[4833]: I0219 13:06:36.698136 4833 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6dbf3254-d4fb-44d8-9f9e-637d9351d696-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:36 crc kubenswrapper[4833]: I0219 13:06:36.698159 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2g2m\" (UniqueName: \"kubernetes.io/projected/872972e7-f010-4500-923f-9d29cf00bb60-kube-api-access-p2g2m\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:36 crc kubenswrapper[4833]: I0219 13:06:36.700125 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dbf3254-d4fb-44d8-9f9e-637d9351d696-kube-api-access-q2dmg" (OuterVolumeSpecName: "kube-api-access-q2dmg") pod "6dbf3254-d4fb-44d8-9f9e-637d9351d696" (UID: "6dbf3254-d4fb-44d8-9f9e-637d9351d696"). InnerVolumeSpecName "kube-api-access-q2dmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:06:36 crc kubenswrapper[4833]: I0219 13:06:36.800204 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2dmg\" (UniqueName: \"kubernetes.io/projected/6dbf3254-d4fb-44d8-9f9e-637d9351d696-kube-api-access-q2dmg\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:36 crc kubenswrapper[4833]: I0219 13:06:36.987987 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 13:06:36 crc kubenswrapper[4833]: I0219 13:06:36.991987 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 19 13:06:37 crc kubenswrapper[4833]: I0219 13:06:37.146798 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7774-account-create-update-dcgfl" event={"ID":"f3bce0ce-1de6-41b9-b947-d6deb44c40a7","Type":"ContainerDied","Data":"46838350b336da62dc1e3fb519bf2084481de50ef63ffd58c1895ff11ccb941f"} Feb 19 13:06:37 crc kubenswrapper[4833]: I0219 13:06:37.146843 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46838350b336da62dc1e3fb519bf2084481de50ef63ffd58c1895ff11ccb941f" Feb 19 13:06:37 crc kubenswrapper[4833]: I0219 13:06:37.146945 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7774-account-create-update-dcgfl" Feb 19 13:06:37 crc kubenswrapper[4833]: I0219 13:06:37.150563 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-bqvdg" event={"ID":"6dbf3254-d4fb-44d8-9f9e-637d9351d696","Type":"ContainerDied","Data":"2eef1923f7bd2b514e89e6096ac54b1bb0b702a9a65edba4a4b961ce17f5716c"} Feb 19 13:06:37 crc kubenswrapper[4833]: I0219 13:06:37.150604 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2eef1923f7bd2b514e89e6096ac54b1bb0b702a9a65edba4a4b961ce17f5716c" Feb 19 13:06:37 crc kubenswrapper[4833]: I0219 13:06:37.150696 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-bqvdg" Feb 19 13:06:37 crc kubenswrapper[4833]: I0219 13:06:37.153372 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-56bb-account-create-update-45wwp" event={"ID":"872972e7-f010-4500-923f-9d29cf00bb60","Type":"ContainerDied","Data":"6222cc50563c5c50eaff4d368d966f5002a3d6420f678e09d5bf5b25a9df661d"} Feb 19 13:06:37 crc kubenswrapper[4833]: I0219 13:06:37.153429 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6222cc50563c5c50eaff4d368d966f5002a3d6420f678e09d5bf5b25a9df661d" Feb 19 13:06:37 crc kubenswrapper[4833]: I0219 13:06:37.153708 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-56bb-account-create-update-45wwp" Feb 19 13:06:37 crc kubenswrapper[4833]: I0219 13:06:37.838979 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kws6n"] Feb 19 13:06:37 crc kubenswrapper[4833]: E0219 13:06:37.839869 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="265af042-f6c4-4d9c-90a9-ae5305c7951e" containerName="mariadb-account-create-update" Feb 19 13:06:37 crc kubenswrapper[4833]: I0219 13:06:37.839997 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="265af042-f6c4-4d9c-90a9-ae5305c7951e" containerName="mariadb-account-create-update" Feb 19 13:06:37 crc kubenswrapper[4833]: E0219 13:06:37.840085 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdd856e6-ba47-4660-b0b1-7e6202b97bb5" containerName="mariadb-database-create" Feb 19 13:06:37 crc kubenswrapper[4833]: I0219 13:06:37.840150 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdd856e6-ba47-4660-b0b1-7e6202b97bb5" containerName="mariadb-database-create" Feb 19 13:06:37 crc kubenswrapper[4833]: E0219 13:06:37.840235 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4d96078-4bfc-49d9-b70b-1be6d9b29558" containerName="mariadb-database-create" Feb 19 13:06:37 crc kubenswrapper[4833]: I0219 13:06:37.840301 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4d96078-4bfc-49d9-b70b-1be6d9b29558" containerName="mariadb-database-create" Feb 19 13:06:37 crc kubenswrapper[4833]: E0219 13:06:37.840387 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dbf3254-d4fb-44d8-9f9e-637d9351d696" containerName="mariadb-database-create" Feb 19 13:06:37 crc kubenswrapper[4833]: I0219 13:06:37.840465 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dbf3254-d4fb-44d8-9f9e-637d9351d696" containerName="mariadb-database-create" Feb 19 13:06:37 crc kubenswrapper[4833]: E0219 13:06:37.840559 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="872972e7-f010-4500-923f-9d29cf00bb60" containerName="mariadb-account-create-update" Feb 19 13:06:37 crc kubenswrapper[4833]: I0219 13:06:37.840632 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="872972e7-f010-4500-923f-9d29cf00bb60" containerName="mariadb-account-create-update" Feb 19 13:06:37 crc kubenswrapper[4833]: E0219 13:06:37.840713 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3bce0ce-1de6-41b9-b947-d6deb44c40a7" containerName="mariadb-account-create-update" Feb 19 13:06:37 crc kubenswrapper[4833]: I0219 13:06:37.840781 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3bce0ce-1de6-41b9-b947-d6deb44c40a7" containerName="mariadb-account-create-update" Feb 19 13:06:37 crc kubenswrapper[4833]: I0219 
13:06:37.841062 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdd856e6-ba47-4660-b0b1-7e6202b97bb5" containerName="mariadb-database-create" Feb 19 13:06:37 crc kubenswrapper[4833]: I0219 13:06:37.841162 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dbf3254-d4fb-44d8-9f9e-637d9351d696" containerName="mariadb-database-create" Feb 19 13:06:37 crc kubenswrapper[4833]: I0219 13:06:37.841249 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4d96078-4bfc-49d9-b70b-1be6d9b29558" containerName="mariadb-database-create" Feb 19 13:06:37 crc kubenswrapper[4833]: I0219 13:06:37.841332 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="265af042-f6c4-4d9c-90a9-ae5305c7951e" containerName="mariadb-account-create-update" Feb 19 13:06:37 crc kubenswrapper[4833]: I0219 13:06:37.841413 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3bce0ce-1de6-41b9-b947-d6deb44c40a7" containerName="mariadb-account-create-update" Feb 19 13:06:37 crc kubenswrapper[4833]: I0219 13:06:37.841517 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="872972e7-f010-4500-923f-9d29cf00bb60" containerName="mariadb-account-create-update" Feb 19 13:06:37 crc kubenswrapper[4833]: I0219 13:06:37.842307 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kws6n" Feb 19 13:06:37 crc kubenswrapper[4833]: I0219 13:06:37.845217 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 19 13:06:37 crc kubenswrapper[4833]: I0219 13:06:37.845876 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-42ml7" Feb 19 13:06:37 crc kubenswrapper[4833]: I0219 13:06:37.845951 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 19 13:06:37 crc kubenswrapper[4833]: I0219 13:06:37.848123 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kws6n"] Feb 19 13:06:37 crc kubenswrapper[4833]: I0219 13:06:37.923238 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4708f947-ca39-46bf-b1f2-35d28d6dc573-scripts\") pod \"nova-cell0-conductor-db-sync-kws6n\" (UID: \"4708f947-ca39-46bf-b1f2-35d28d6dc573\") " pod="openstack/nova-cell0-conductor-db-sync-kws6n" Feb 19 13:06:37 crc kubenswrapper[4833]: I0219 13:06:37.923386 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4708f947-ca39-46bf-b1f2-35d28d6dc573-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-kws6n\" (UID: \"4708f947-ca39-46bf-b1f2-35d28d6dc573\") " pod="openstack/nova-cell0-conductor-db-sync-kws6n" Feb 19 13:06:37 crc kubenswrapper[4833]: I0219 13:06:37.923465 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlwrg\" (UniqueName: \"kubernetes.io/projected/4708f947-ca39-46bf-b1f2-35d28d6dc573-kube-api-access-jlwrg\") pod \"nova-cell0-conductor-db-sync-kws6n\" (UID: \"4708f947-ca39-46bf-b1f2-35d28d6dc573\") " pod="openstack/nova-cell0-conductor-db-sync-kws6n" Feb 19 13:06:37 crc kubenswrapper[4833]: I0219 13:06:37.923485 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/4708f947-ca39-46bf-b1f2-35d28d6dc573-config-data\") pod \"nova-cell0-conductor-db-sync-kws6n\" (UID: \"4708f947-ca39-46bf-b1f2-35d28d6dc573\") " pod="openstack/nova-cell0-conductor-db-sync-kws6n" Feb 19 13:06:38 crc kubenswrapper[4833]: I0219 13:06:38.025774 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4708f947-ca39-46bf-b1f2-35d28d6dc573-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-kws6n\" (UID: \"4708f947-ca39-46bf-b1f2-35d28d6dc573\") " pod="openstack/nova-cell0-conductor-db-sync-kws6n" Feb 19 13:06:38 crc kubenswrapper[4833]: I0219 13:06:38.026068 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlwrg\" (UniqueName: \"kubernetes.io/projected/4708f947-ca39-46bf-b1f2-35d28d6dc573-kube-api-access-jlwrg\") pod \"nova-cell0-conductor-db-sync-kws6n\" (UID: \"4708f947-ca39-46bf-b1f2-35d28d6dc573\") " pod="openstack/nova-cell0-conductor-db-sync-kws6n" Feb 19 13:06:38 crc kubenswrapper[4833]: I0219 13:06:38.026192 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4708f947-ca39-46bf-b1f2-35d28d6dc573-config-data\") pod \"nova-cell0-conductor-db-sync-kws6n\" (UID: \"4708f947-ca39-46bf-b1f2-35d28d6dc573\") " pod="openstack/nova-cell0-conductor-db-sync-kws6n" Feb 19 13:06:38 crc kubenswrapper[4833]: I0219 13:06:38.026314 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4708f947-ca39-46bf-b1f2-35d28d6dc573-scripts\") pod \"nova-cell0-conductor-db-sync-kws6n\" (UID: \"4708f947-ca39-46bf-b1f2-35d28d6dc573\") " pod="openstack/nova-cell0-conductor-db-sync-kws6n" Feb 19 13:06:38 crc kubenswrapper[4833]: I0219 13:06:38.031086 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4708f947-ca39-46bf-b1f2-35d28d6dc573-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-kws6n\" (UID: \"4708f947-ca39-46bf-b1f2-35d28d6dc573\") " pod="openstack/nova-cell0-conductor-db-sync-kws6n" Feb 19 13:06:38 crc kubenswrapper[4833]: I0219 13:06:38.033044 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4708f947-ca39-46bf-b1f2-35d28d6dc573-config-data\") pod \"nova-cell0-conductor-db-sync-kws6n\" (UID: \"4708f947-ca39-46bf-b1f2-35d28d6dc573\") " pod="openstack/nova-cell0-conductor-db-sync-kws6n" Feb 19 13:06:38 crc kubenswrapper[4833]: I0219 13:06:38.038054 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4708f947-ca39-46bf-b1f2-35d28d6dc573-scripts\") pod \"nova-cell0-conductor-db-sync-kws6n\" (UID: \"4708f947-ca39-46bf-b1f2-35d28d6dc573\") " pod="openstack/nova-cell0-conductor-db-sync-kws6n" Feb 19 13:06:38 crc kubenswrapper[4833]: I0219 13:06:38.055817 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlwrg\" (UniqueName: \"kubernetes.io/projected/4708f947-ca39-46bf-b1f2-35d28d6dc573-kube-api-access-jlwrg\") pod \"nova-cell0-conductor-db-sync-kws6n\" (UID: \"4708f947-ca39-46bf-b1f2-35d28d6dc573\") " pod="openstack/nova-cell0-conductor-db-sync-kws6n" Feb 19 13:06:38 crc kubenswrapper[4833]: I0219 13:06:38.159027 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kws6n" Feb 19 13:06:38 crc kubenswrapper[4833]: I0219 13:06:38.646826 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kws6n"] Feb 19 13:06:39 crc kubenswrapper[4833]: I0219 13:06:39.177749 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kws6n" event={"ID":"4708f947-ca39-46bf-b1f2-35d28d6dc573","Type":"ContainerStarted","Data":"3d8f623d94f86bdf22071eab810fa7907d3607fcdc20d8ca72ee2f61208c18d2"} Feb 19 13:06:39 crc kubenswrapper[4833]: I0219 13:06:39.611758 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 13:06:39 crc kubenswrapper[4833]: I0219 13:06:39.611809 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 19 13:06:39 crc kubenswrapper[4833]: I0219 13:06:39.642060 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 13:06:39 crc kubenswrapper[4833]: I0219 13:06:39.667777 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 19 13:06:40 crc kubenswrapper[4833]: I0219 13:06:40.189363 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 13:06:40 crc kubenswrapper[4833]: I0219 13:06:40.189397 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 19 13:06:41 crc kubenswrapper[4833]: I0219 13:06:41.982243 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 13:06:42 crc kubenswrapper[4833]: I0219 13:06:42.097546 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 19 13:06:45 crc kubenswrapper[4833]: I0219 13:06:45.744473 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:06:45 crc kubenswrapper[4833]: I0219 13:06:45.744846 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:06:45 crc kubenswrapper[4833]: I0219 13:06:45.744887 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" Feb 19 13:06:45 crc kubenswrapper[4833]: I0219 13:06:45.745703 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"79901fa015c98a89f8eb5d748d58a779eb4aed74d086040cca560575f94233a9"} pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 13:06:45 crc kubenswrapper[4833]: I0219 13:06:45.745768 4833 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" containerID="cri-o://79901fa015c98a89f8eb5d748d58a779eb4aed74d086040cca560575f94233a9" gracePeriod=600 Feb 19 13:06:46 crc kubenswrapper[4833]: I0219 13:06:46.263241 4833 generic.go:334] "Generic (PLEG): container finished" podID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerID="79901fa015c98a89f8eb5d748d58a779eb4aed74d086040cca560575f94233a9" exitCode=0 Feb 19 13:06:46 crc kubenswrapper[4833]: I0219 13:06:46.263348 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" event={"ID":"a396d626-cea2-42cf-84c5-943b0b85a92b","Type":"ContainerDied","Data":"79901fa015c98a89f8eb5d748d58a779eb4aed74d086040cca560575f94233a9"} Feb 19 13:06:46 crc kubenswrapper[4833]: I0219 13:06:46.263638 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" event={"ID":"a396d626-cea2-42cf-84c5-943b0b85a92b","Type":"ContainerStarted","Data":"44cd4d92890c7506a1edce4407a60145e4dd4d2e3ac145ff2d3b775c7a0f6b00"} Feb 19 13:06:46 crc kubenswrapper[4833]: I0219 13:06:46.263676 4833 scope.go:117] "RemoveContainer" containerID="44979c86cbc1a1a08268bf3eace13600a4809b3fa1a8321a545736d1f5619e6f" Feb 19 13:06:46 crc kubenswrapper[4833]: I0219 13:06:46.271296 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kws6n" event={"ID":"4708f947-ca39-46bf-b1f2-35d28d6dc573","Type":"ContainerStarted","Data":"7570dfe81763a724dabe497998a8d726e3cc8ebf05bd8a8446f6cebee08c4bf3"} Feb 19 13:06:46 crc kubenswrapper[4833]: I0219 13:06:46.315342 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-kws6n" podStartSLOduration=2.57232889 podStartE2EDuration="9.315326218s" podCreationTimestamp="2026-02-19 13:06:37 +0000 UTC" firstStartedPulling="2026-02-19 13:06:38.643324948 +0000 UTC m=+1209.038843716" lastFinishedPulling="2026-02-19 13:06:45.386322266 +0000 UTC m=+1215.781841044" observedRunningTime="2026-02-19 13:06:46.312090611 +0000 UTC m=+1216.707609419" watchObservedRunningTime="2026-02-19 13:06:46.315326218 +0000 UTC m=+1216.710845006" Feb 19 13:06:56 crc kubenswrapper[4833]: I0219 13:06:56.403687 4833 generic.go:334] "Generic (PLEG): container finished" podID="4708f947-ca39-46bf-b1f2-35d28d6dc573" containerID="7570dfe81763a724dabe497998a8d726e3cc8ebf05bd8a8446f6cebee08c4bf3" exitCode=0 Feb 19 13:06:56 crc kubenswrapper[4833]: I0219 13:06:56.403790 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kws6n" event={"ID":"4708f947-ca39-46bf-b1f2-35d28d6dc573","Type":"ContainerDied","Data":"7570dfe81763a724dabe497998a8d726e3cc8ebf05bd8a8446f6cebee08c4bf3"} Feb 19 13:06:57 crc kubenswrapper[4833]: I0219 13:06:57.800543 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kws6n" Feb 19 13:06:57 crc kubenswrapper[4833]: I0219 13:06:57.884974 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4708f947-ca39-46bf-b1f2-35d28d6dc573-scripts\") pod \"4708f947-ca39-46bf-b1f2-35d28d6dc573\" (UID: \"4708f947-ca39-46bf-b1f2-35d28d6dc573\") " Feb 19 13:06:57 crc kubenswrapper[4833]: I0219 13:06:57.886678 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4708f947-ca39-46bf-b1f2-35d28d6dc573-config-data\") pod \"4708f947-ca39-46bf-b1f2-35d28d6dc573\" (UID: \"4708f947-ca39-46bf-b1f2-35d28d6dc573\") " Feb 19 13:06:57 crc kubenswrapper[4833]: I0219 13:06:57.886923 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlwrg\" (UniqueName: \"kubernetes.io/projected/4708f947-ca39-46bf-b1f2-35d28d6dc573-kube-api-access-jlwrg\") pod \"4708f947-ca39-46bf-b1f2-35d28d6dc573\" (UID: \"4708f947-ca39-46bf-b1f2-35d28d6dc573\") " Feb 19 13:06:57 crc kubenswrapper[4833]: I0219 13:06:57.887157 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4708f947-ca39-46bf-b1f2-35d28d6dc573-combined-ca-bundle\") pod \"4708f947-ca39-46bf-b1f2-35d28d6dc573\" (UID: \"4708f947-ca39-46bf-b1f2-35d28d6dc573\") " Feb 19 13:06:57 crc kubenswrapper[4833]: I0219 13:06:57.896473 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4708f947-ca39-46bf-b1f2-35d28d6dc573-scripts" (OuterVolumeSpecName: "scripts") pod "4708f947-ca39-46bf-b1f2-35d28d6dc573" (UID: "4708f947-ca39-46bf-b1f2-35d28d6dc573"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:57 crc kubenswrapper[4833]: I0219 13:06:57.900543 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4708f947-ca39-46bf-b1f2-35d28d6dc573-kube-api-access-jlwrg" (OuterVolumeSpecName: "kube-api-access-jlwrg") pod "4708f947-ca39-46bf-b1f2-35d28d6dc573" (UID: "4708f947-ca39-46bf-b1f2-35d28d6dc573"). InnerVolumeSpecName "kube-api-access-jlwrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:06:57 crc kubenswrapper[4833]: I0219 13:06:57.921428 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4708f947-ca39-46bf-b1f2-35d28d6dc573-config-data" (OuterVolumeSpecName: "config-data") pod "4708f947-ca39-46bf-b1f2-35d28d6dc573" (UID: "4708f947-ca39-46bf-b1f2-35d28d6dc573"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:57 crc kubenswrapper[4833]: I0219 13:06:57.922944 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4708f947-ca39-46bf-b1f2-35d28d6dc573-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4708f947-ca39-46bf-b1f2-35d28d6dc573" (UID: "4708f947-ca39-46bf-b1f2-35d28d6dc573"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:06:57 crc kubenswrapper[4833]: I0219 13:06:57.990266 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4708f947-ca39-46bf-b1f2-35d28d6dc573-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:57 crc kubenswrapper[4833]: I0219 13:06:57.990529 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4708f947-ca39-46bf-b1f2-35d28d6dc573-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:57 crc kubenswrapper[4833]: I0219 13:06:57.990546 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4708f947-ca39-46bf-b1f2-35d28d6dc573-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:57 crc kubenswrapper[4833]: I0219 13:06:57.990555 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlwrg\" (UniqueName: \"kubernetes.io/projected/4708f947-ca39-46bf-b1f2-35d28d6dc573-kube-api-access-jlwrg\") on node \"crc\" DevicePath \"\"" Feb 19 13:06:58 crc kubenswrapper[4833]: I0219 13:06:58.431639 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-kws6n" event={"ID":"4708f947-ca39-46bf-b1f2-35d28d6dc573","Type":"ContainerDied","Data":"3d8f623d94f86bdf22071eab810fa7907d3607fcdc20d8ca72ee2f61208c18d2"} Feb 19 13:06:58 crc kubenswrapper[4833]: I0219 13:06:58.431688 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d8f623d94f86bdf22071eab810fa7907d3607fcdc20d8ca72ee2f61208c18d2" Feb 19 13:06:58 crc kubenswrapper[4833]: I0219 13:06:58.431762 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-kws6n" Feb 19 13:06:58 crc kubenswrapper[4833]: I0219 13:06:58.594524 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 13:06:58 crc kubenswrapper[4833]: E0219 13:06:58.594932 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4708f947-ca39-46bf-b1f2-35d28d6dc573" containerName="nova-cell0-conductor-db-sync" Feb 19 13:06:58 crc kubenswrapper[4833]: I0219 13:06:58.594954 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4708f947-ca39-46bf-b1f2-35d28d6dc573" containerName="nova-cell0-conductor-db-sync" Feb 19 13:06:58 crc kubenswrapper[4833]: I0219 13:06:58.595180 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="4708f947-ca39-46bf-b1f2-35d28d6dc573" containerName="nova-cell0-conductor-db-sync" Feb 19 13:06:58 crc kubenswrapper[4833]: I0219 13:06:58.595907 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 13:06:58 crc kubenswrapper[4833]: I0219 13:06:58.598742 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 19 13:06:58 crc kubenswrapper[4833]: I0219 13:06:58.599845 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-42ml7" Feb 19 13:06:58 crc kubenswrapper[4833]: I0219 13:06:58.612277 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 13:06:58 crc kubenswrapper[4833]: I0219 13:06:58.709914 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl64w\" (UniqueName: \"kubernetes.io/projected/5cccac96-51b3-457e-86eb-bd59ce49b7cf-kube-api-access-kl64w\") pod \"nova-cell0-conductor-0\" (UID: \"5cccac96-51b3-457e-86eb-bd59ce49b7cf\") " pod="openstack/nova-cell0-conductor-0" Feb 19 13:06:58 crc kubenswrapper[4833]: I0219 13:06:58.710022 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cccac96-51b3-457e-86eb-bd59ce49b7cf-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5cccac96-51b3-457e-86eb-bd59ce49b7cf\") " pod="openstack/nova-cell0-conductor-0" Feb 19 13:06:58 crc kubenswrapper[4833]: I0219 13:06:58.710136 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cccac96-51b3-457e-86eb-bd59ce49b7cf-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5cccac96-51b3-457e-86eb-bd59ce49b7cf\") " pod="openstack/nova-cell0-conductor-0" Feb 19 13:06:58 crc kubenswrapper[4833]: I0219 13:06:58.812485 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cccac96-51b3-457e-86eb-bd59ce49b7cf-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5cccac96-51b3-457e-86eb-bd59ce49b7cf\") " pod="openstack/nova-cell0-conductor-0" Feb 19 13:06:58 crc kubenswrapper[4833]: I0219 13:06:58.812617 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl64w\" (UniqueName: \"kubernetes.io/projected/5cccac96-51b3-457e-86eb-bd59ce49b7cf-kube-api-access-kl64w\") pod \"nova-cell0-conductor-0\" (UID: \"5cccac96-51b3-457e-86eb-bd59ce49b7cf\") " pod="openstack/nova-cell0-conductor-0" Feb 19 13:06:58 crc kubenswrapper[4833]: I0219 13:06:58.812702 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cccac96-51b3-457e-86eb-bd59ce49b7cf-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5cccac96-51b3-457e-86eb-bd59ce49b7cf\") " pod="openstack/nova-cell0-conductor-0" Feb 19 13:06:58 crc kubenswrapper[4833]: I0219 13:06:58.817232 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cccac96-51b3-457e-86eb-bd59ce49b7cf-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5cccac96-51b3-457e-86eb-bd59ce49b7cf\") " pod="openstack/nova-cell0-conductor-0" Feb 19 13:06:58 crc kubenswrapper[4833]: I0219 13:06:58.818056 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cccac96-51b3-457e-86eb-bd59ce49b7cf-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"5cccac96-51b3-457e-86eb-bd59ce49b7cf\") " pod="openstack/nova-cell0-conductor-0" Feb 19 13:06:58 crc kubenswrapper[4833]: I0219 13:06:58.838047 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl64w\" (UniqueName: \"kubernetes.io/projected/5cccac96-51b3-457e-86eb-bd59ce49b7cf-kube-api-access-kl64w\") pod \"nova-cell0-conductor-0\" (UID: \"5cccac96-51b3-457e-86eb-bd59ce49b7cf\") " pod="openstack/nova-cell0-conductor-0" Feb 19 13:06:58 crc kubenswrapper[4833]: I0219 13:06:58.918078 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 19 13:06:59 crc kubenswrapper[4833]: I0219 13:06:59.184007 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 19 13:06:59 crc kubenswrapper[4833]: W0219 13:06:59.197740 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cccac96_51b3_457e_86eb_bd59ce49b7cf.slice/crio-78e181fc7d55c2fa6b47f2f82518051ce1d2870237ba85d44c1c994b1cb4d243 WatchSource:0}: Error finding container 78e181fc7d55c2fa6b47f2f82518051ce1d2870237ba85d44c1c994b1cb4d243: Status 404 returned error can't find the container with id 78e181fc7d55c2fa6b47f2f82518051ce1d2870237ba85d44c1c994b1cb4d243 Feb 19 13:06:59 crc kubenswrapper[4833]: I0219 13:06:59.448634 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5cccac96-51b3-457e-86eb-bd59ce49b7cf","Type":"ContainerStarted","Data":"8672568e91801a53e431270bfb4e3a23aad0f6ab9ecc40acf985c46805a9768f"} Feb 19 13:06:59 crc kubenswrapper[4833]: I0219 13:06:59.449358 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 19 13:06:59 crc kubenswrapper[4833]: I0219 13:06:59.449429 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5cccac96-51b3-457e-86eb-bd59ce49b7cf","Type":"ContainerStarted","Data":"78e181fc7d55c2fa6b47f2f82518051ce1d2870237ba85d44c1c994b1cb4d243"} Feb 19 13:06:59 crc kubenswrapper[4833]: I0219 13:06:59.470130 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=1.4701071319999999 podStartE2EDuration="1.470107132s" podCreationTimestamp="2026-02-19 13:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:06:59.465144388 +0000 UTC m=+1229.860663166" watchObservedRunningTime="2026-02-19 13:06:59.470107132 +0000 UTC m=+1229.865625900" Feb 19 13:07:01 crc kubenswrapper[4833]: I0219 13:07:01.627878 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 13:07:05 crc kubenswrapper[4833]: I0219 13:07:05.471204 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 13:07:05 crc kubenswrapper[4833]: I0219 13:07:05.472170 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="4635da17-a051-4ef3-a8e3-f0dc7996cf17" containerName="kube-state-metrics" containerID="cri-o://d391efd0238898c77044c291911202dfb1d9bc42e27c3c9a007aa36c274ce1c7" gracePeriod=30 Feb 19 13:07:05 crc kubenswrapper[4833]: I0219 13:07:05.961937 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 13:07:06 crc kubenswrapper[4833]: I0219 13:07:06.060072 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwzpm\" (UniqueName: \"kubernetes.io/projected/4635da17-a051-4ef3-a8e3-f0dc7996cf17-kube-api-access-hwzpm\") pod \"4635da17-a051-4ef3-a8e3-f0dc7996cf17\" (UID: \"4635da17-a051-4ef3-a8e3-f0dc7996cf17\") " Feb 19 13:07:06 crc kubenswrapper[4833]: I0219 13:07:06.067716 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4635da17-a051-4ef3-a8e3-f0dc7996cf17-kube-api-access-hwzpm" (OuterVolumeSpecName: "kube-api-access-hwzpm") pod "4635da17-a051-4ef3-a8e3-f0dc7996cf17" (UID: "4635da17-a051-4ef3-a8e3-f0dc7996cf17"). InnerVolumeSpecName "kube-api-access-hwzpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:07:06 crc kubenswrapper[4833]: I0219 13:07:06.162463 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwzpm\" (UniqueName: \"kubernetes.io/projected/4635da17-a051-4ef3-a8e3-f0dc7996cf17-kube-api-access-hwzpm\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:06 crc kubenswrapper[4833]: I0219 13:07:06.522959 4833 generic.go:334] "Generic (PLEG): container finished" podID="4635da17-a051-4ef3-a8e3-f0dc7996cf17" containerID="d391efd0238898c77044c291911202dfb1d9bc42e27c3c9a007aa36c274ce1c7" exitCode=2 Feb 19 13:07:06 crc kubenswrapper[4833]: I0219 13:07:06.523039 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4635da17-a051-4ef3-a8e3-f0dc7996cf17","Type":"ContainerDied","Data":"d391efd0238898c77044c291911202dfb1d9bc42e27c3c9a007aa36c274ce1c7"} Feb 19 13:07:06 crc kubenswrapper[4833]: I0219 13:07:06.523069 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 13:07:06 crc kubenswrapper[4833]: I0219 13:07:06.524377 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4635da17-a051-4ef3-a8e3-f0dc7996cf17","Type":"ContainerDied","Data":"6c2a150dbbce8d3db59a2f6157b2a397e68f0d2239701d268a463d1e71749218"} Feb 19 13:07:06 crc kubenswrapper[4833]: I0219 13:07:06.524656 4833 scope.go:117] "RemoveContainer" containerID="d391efd0238898c77044c291911202dfb1d9bc42e27c3c9a007aa36c274ce1c7" Feb 19 13:07:06 crc kubenswrapper[4833]: I0219 13:07:06.551706 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 13:07:06 crc kubenswrapper[4833]: I0219 13:07:06.557636 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 13:07:06 crc kubenswrapper[4833]: I0219 13:07:06.563513 4833 scope.go:117] "RemoveContainer" containerID="d391efd0238898c77044c291911202dfb1d9bc42e27c3c9a007aa36c274ce1c7" Feb 19 13:07:06 crc kubenswrapper[4833]: E0219 13:07:06.564262 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d391efd0238898c77044c291911202dfb1d9bc42e27c3c9a007aa36c274ce1c7\": container with ID starting with d391efd0238898c77044c291911202dfb1d9bc42e27c3c9a007aa36c274ce1c7 not found: ID does not exist" containerID="d391efd0238898c77044c291911202dfb1d9bc42e27c3c9a007aa36c274ce1c7" Feb 19 13:07:06 crc kubenswrapper[4833]: I0219 13:07:06.565754 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d391efd0238898c77044c291911202dfb1d9bc42e27c3c9a007aa36c274ce1c7"} err="failed to get container status \"d391efd0238898c77044c291911202dfb1d9bc42e27c3c9a007aa36c274ce1c7\": rpc error: code = NotFound desc = could not find container \"d391efd0238898c77044c291911202dfb1d9bc42e27c3c9a007aa36c274ce1c7\": container with ID starting with d391efd0238898c77044c291911202dfb1d9bc42e27c3c9a007aa36c274ce1c7 not found: ID does not exist" Feb 19 13:07:06 crc kubenswrapper[4833]: I0219 13:07:06.575373 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 13:07:06 crc kubenswrapper[4833]: E0219 13:07:06.575769 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4635da17-a051-4ef3-a8e3-f0dc7996cf17" containerName="kube-state-metrics" Feb 19 13:07:06 crc kubenswrapper[4833]: I0219 13:07:06.575788 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4635da17-a051-4ef3-a8e3-f0dc7996cf17" containerName="kube-state-metrics" Feb 19 13:07:06 crc kubenswrapper[4833]: I0219 13:07:06.576010 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="4635da17-a051-4ef3-a8e3-f0dc7996cf17" containerName="kube-state-metrics" Feb 19 13:07:06 crc kubenswrapper[4833]: I0219 13:07:06.578572 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 13:07:06 crc kubenswrapper[4833]: I0219 13:07:06.581314 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 19 13:07:06 crc kubenswrapper[4833]: I0219 13:07:06.584312 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 19 13:07:06 crc kubenswrapper[4833]: I0219 13:07:06.600778 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 13:07:06 crc kubenswrapper[4833]: I0219 13:07:06.676548 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7f4a85b-484c-414d-969f-58baa362a1ff-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"d7f4a85b-484c-414d-969f-58baa362a1ff\") " pod="openstack/kube-state-metrics-0" Feb 19 13:07:06 crc kubenswrapper[4833]: I0219 13:07:06.677011 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/d7f4a85b-484c-414d-969f-58baa362a1ff-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"d7f4a85b-484c-414d-969f-58baa362a1ff\") " pod="openstack/kube-state-metrics-0" Feb 19 13:07:06 crc kubenswrapper[4833]: I0219 13:07:06.677103 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7f4a85b-484c-414d-969f-58baa362a1ff-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"d7f4a85b-484c-414d-969f-58baa362a1ff\") " pod="openstack/kube-state-metrics-0" Feb 19 13:07:06 crc kubenswrapper[4833]: I0219 13:07:06.677138 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vspcb\" (UniqueName: \"kubernetes.io/projected/d7f4a85b-484c-414d-969f-58baa362a1ff-kube-api-access-vspcb\") pod \"kube-state-metrics-0\" (UID: \"d7f4a85b-484c-414d-969f-58baa362a1ff\") " pod="openstack/kube-state-metrics-0" Feb 19 13:07:06 crc kubenswrapper[4833]: I0219 13:07:06.778686 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7f4a85b-484c-414d-969f-58baa362a1ff-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"d7f4a85b-484c-414d-969f-58baa362a1ff\") " pod="openstack/kube-state-metrics-0" Feb 19 13:07:06 crc kubenswrapper[4833]: I0219 13:07:06.778851 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/d7f4a85b-484c-414d-969f-58baa362a1ff-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"d7f4a85b-484c-414d-969f-58baa362a1ff\") " pod="openstack/kube-state-metrics-0" Feb 19 13:07:06 crc kubenswrapper[4833]: I0219 13:07:06.778933 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7f4a85b-484c-414d-969f-58baa362a1ff-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"d7f4a85b-484c-414d-969f-58baa362a1ff\") " pod="openstack/kube-state-metrics-0" Feb 19 13:07:06 crc kubenswrapper[4833]: I0219 13:07:06.778966 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vspcb\" 
(UniqueName: \"kubernetes.io/projected/d7f4a85b-484c-414d-969f-58baa362a1ff-kube-api-access-vspcb\") pod \"kube-state-metrics-0\" (UID: \"d7f4a85b-484c-414d-969f-58baa362a1ff\") " pod="openstack/kube-state-metrics-0" Feb 19 13:07:06 crc kubenswrapper[4833]: I0219 13:07:06.785805 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/d7f4a85b-484c-414d-969f-58baa362a1ff-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"d7f4a85b-484c-414d-969f-58baa362a1ff\") " pod="openstack/kube-state-metrics-0" Feb 19 13:07:06 crc kubenswrapper[4833]: I0219 13:07:06.787836 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7f4a85b-484c-414d-969f-58baa362a1ff-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"d7f4a85b-484c-414d-969f-58baa362a1ff\") " pod="openstack/kube-state-metrics-0" Feb 19 13:07:06 crc kubenswrapper[4833]: I0219 13:07:06.799989 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7f4a85b-484c-414d-969f-58baa362a1ff-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"d7f4a85b-484c-414d-969f-58baa362a1ff\") " pod="openstack/kube-state-metrics-0" Feb 19 13:07:06 crc kubenswrapper[4833]: I0219 13:07:06.801804 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vspcb\" (UniqueName: \"kubernetes.io/projected/d7f4a85b-484c-414d-969f-58baa362a1ff-kube-api-access-vspcb\") pod \"kube-state-metrics-0\" (UID: \"d7f4a85b-484c-414d-969f-58baa362a1ff\") " pod="openstack/kube-state-metrics-0" Feb 19 13:07:06 crc kubenswrapper[4833]: I0219 13:07:06.912862 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 19 13:07:07 crc kubenswrapper[4833]: I0219 13:07:07.322137 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:07:07 crc kubenswrapper[4833]: I0219 13:07:07.322798 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="87138863-564a-46e7-80b0-eef5a19d7977" containerName="ceilometer-central-agent" containerID="cri-o://dcf54e4cdc78b40e62c3ae3515d072a6e94994455313f28cd6a6019a72bd8d7a" gracePeriod=30 Feb 19 13:07:07 crc kubenswrapper[4833]: I0219 13:07:07.323288 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="87138863-564a-46e7-80b0-eef5a19d7977" containerName="proxy-httpd" containerID="cri-o://0d73f3ffe69a520b27305f6cce65bedcd598f033c276f9c1caf76a9e06e67868" gracePeriod=30 Feb 19 13:07:07 crc kubenswrapper[4833]: I0219 13:07:07.323358 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="87138863-564a-46e7-80b0-eef5a19d7977" containerName="sg-core" containerID="cri-o://f334ac772d67a4815bfbfc4911bb5aca62ed2ca22349f15088bea6f7c7dd2fd8" gracePeriod=30 Feb 19 13:07:07 crc kubenswrapper[4833]: I0219 13:07:07.323407 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="87138863-564a-46e7-80b0-eef5a19d7977" containerName="ceilometer-notification-agent" containerID="cri-o://1456a8661612e3fa4a5e6ab44301c2445c0b0d1bcedfc37c4f5a3a7ac3372e30" gracePeriod=30 Feb 19 13:07:07 crc kubenswrapper[4833]: I0219 13:07:07.423330 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 19 13:07:07 crc kubenswrapper[4833]: I0219 13:07:07.533671 4833 generic.go:334] "Generic (PLEG): container finished" podID="87138863-564a-46e7-80b0-eef5a19d7977" containerID="0d73f3ffe69a520b27305f6cce65bedcd598f033c276f9c1caf76a9e06e67868" exitCode=0 Feb 19 13:07:07 crc kubenswrapper[4833]: I0219 13:07:07.533703 4833 generic.go:334] "Generic (PLEG): container finished" podID="87138863-564a-46e7-80b0-eef5a19d7977" containerID="f334ac772d67a4815bfbfc4911bb5aca62ed2ca22349f15088bea6f7c7dd2fd8" exitCode=2 Feb 19 13:07:07 crc kubenswrapper[4833]: I0219 13:07:07.533732 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87138863-564a-46e7-80b0-eef5a19d7977","Type":"ContainerDied","Data":"0d73f3ffe69a520b27305f6cce65bedcd598f033c276f9c1caf76a9e06e67868"} Feb 19 13:07:07 crc kubenswrapper[4833]: I0219 13:07:07.533752 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87138863-564a-46e7-80b0-eef5a19d7977","Type":"ContainerDied","Data":"f334ac772d67a4815bfbfc4911bb5aca62ed2ca22349f15088bea6f7c7dd2fd8"} Feb 19 13:07:07 crc kubenswrapper[4833]: I0219 13:07:07.535940 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d7f4a85b-484c-414d-969f-58baa362a1ff","Type":"ContainerStarted","Data":"6202891193dda725e293f511c6629b702601bb2198b39cfa8be136ff69e8bab6"} Feb 19 13:07:08 crc kubenswrapper[4833]: I0219 13:07:08.327983 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4635da17-a051-4ef3-a8e3-f0dc7996cf17" path="/var/lib/kubelet/pods/4635da17-a051-4ef3-a8e3-f0dc7996cf17/volumes" Feb 19 13:07:08 crc kubenswrapper[4833]: I0219 13:07:08.548191 4833 generic.go:334] "Generic (PLEG): container finished" 
podID="87138863-564a-46e7-80b0-eef5a19d7977" containerID="dcf54e4cdc78b40e62c3ae3515d072a6e94994455313f28cd6a6019a72bd8d7a" exitCode=0 Feb 19 13:07:08 crc kubenswrapper[4833]: I0219 13:07:08.548259 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87138863-564a-46e7-80b0-eef5a19d7977","Type":"ContainerDied","Data":"dcf54e4cdc78b40e62c3ae3515d072a6e94994455313f28cd6a6019a72bd8d7a"} Feb 19 13:07:08 crc kubenswrapper[4833]: I0219 13:07:08.550122 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d7f4a85b-484c-414d-969f-58baa362a1ff","Type":"ContainerStarted","Data":"eb279968453233a6836491095ec01e96f372a9f26388c7862f3a9da69ac31389"} Feb 19 13:07:08 crc kubenswrapper[4833]: I0219 13:07:08.550358 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 19 13:07:08 crc kubenswrapper[4833]: I0219 13:07:08.580841 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.227518264 podStartE2EDuration="2.580811624s" podCreationTimestamp="2026-02-19 13:07:06 +0000 UTC" firstStartedPulling="2026-02-19 13:07:07.424877542 +0000 UTC m=+1237.820396320" lastFinishedPulling="2026-02-19 13:07:07.778170912 +0000 UTC m=+1238.173689680" observedRunningTime="2026-02-19 13:07:08.564867604 +0000 UTC m=+1238.960386362" watchObservedRunningTime="2026-02-19 13:07:08.580811624 +0000 UTC m=+1238.976330432" Feb 19 13:07:08 crc kubenswrapper[4833]: I0219 13:07:08.952646 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.571739 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-xf2cl"] Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.573126 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xf2cl" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.578184 4833 generic.go:334] "Generic (PLEG): container finished" podID="87138863-564a-46e7-80b0-eef5a19d7977" containerID="1456a8661612e3fa4a5e6ab44301c2445c0b0d1bcedfc37c4f5a3a7ac3372e30" exitCode=0 Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.578917 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87138863-564a-46e7-80b0-eef5a19d7977","Type":"ContainerDied","Data":"1456a8661612e3fa4a5e6ab44301c2445c0b0d1bcedfc37c4f5a3a7ac3372e30"} Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.584439 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.584633 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.592338 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-xf2cl"] Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.639529 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e20c111-d37e-4ab6-8096-92fa51a4c4f8-config-data\") pod \"nova-cell0-cell-mapping-xf2cl\" (UID: \"3e20c111-d37e-4ab6-8096-92fa51a4c4f8\") " pod="openstack/nova-cell0-cell-mapping-xf2cl" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.639676 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e20c111-d37e-4ab6-8096-92fa51a4c4f8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xf2cl\" (UID: \"3e20c111-d37e-4ab6-8096-92fa51a4c4f8\") " pod="openstack/nova-cell0-cell-mapping-xf2cl" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.639853 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr5hv\" (UniqueName: \"kubernetes.io/projected/3e20c111-d37e-4ab6-8096-92fa51a4c4f8-kube-api-access-cr5hv\") pod \"nova-cell0-cell-mapping-xf2cl\" (UID: \"3e20c111-d37e-4ab6-8096-92fa51a4c4f8\") " pod="openstack/nova-cell0-cell-mapping-xf2cl" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.640010 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e20c111-d37e-4ab6-8096-92fa51a4c4f8-scripts\") pod \"nova-cell0-cell-mapping-xf2cl\" (UID: \"3e20c111-d37e-4ab6-8096-92fa51a4c4f8\") " pod="openstack/nova-cell0-cell-mapping-xf2cl" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.707905 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.709309 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.712668 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.737184 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.741144 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87138863-564a-46e7-80b0-eef5a19d7977-combined-ca-bundle\") pod \"87138863-564a-46e7-80b0-eef5a19d7977\" (UID: \"87138863-564a-46e7-80b0-eef5a19d7977\") " Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.741289 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87138863-564a-46e7-80b0-eef5a19d7977-config-data\") pod \"87138863-564a-46e7-80b0-eef5a19d7977\" (UID: \"87138863-564a-46e7-80b0-eef5a19d7977\") " Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.741323 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87138863-564a-46e7-80b0-eef5a19d7977-sg-core-conf-yaml\") pod \"87138863-564a-46e7-80b0-eef5a19d7977\" (UID: \"87138863-564a-46e7-80b0-eef5a19d7977\") " Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.741355 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87138863-564a-46e7-80b0-eef5a19d7977-run-httpd\") pod \"87138863-564a-46e7-80b0-eef5a19d7977\" (UID: \"87138863-564a-46e7-80b0-eef5a19d7977\") " Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.741488 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87138863-564a-46e7-80b0-eef5a19d7977-scripts\") pod \"87138863-564a-46e7-80b0-eef5a19d7977\" (UID: \"87138863-564a-46e7-80b0-eef5a19d7977\") " Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.741552 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87138863-564a-46e7-80b0-eef5a19d7977-log-httpd\") pod \"87138863-564a-46e7-80b0-eef5a19d7977\" (UID: \"87138863-564a-46e7-80b0-eef5a19d7977\") " Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.741593 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnfhv\" (UniqueName: \"kubernetes.io/projected/87138863-564a-46e7-80b0-eef5a19d7977-kube-api-access-nnfhv\") pod \"87138863-564a-46e7-80b0-eef5a19d7977\" (UID: \"87138863-564a-46e7-80b0-eef5a19d7977\") " Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.741982 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr5hv\" (UniqueName: \"kubernetes.io/projected/3e20c111-d37e-4ab6-8096-92fa51a4c4f8-kube-api-access-cr5hv\") pod \"nova-cell0-cell-mapping-xf2cl\" (UID: \"3e20c111-d37e-4ab6-8096-92fa51a4c4f8\") " pod="openstack/nova-cell0-cell-mapping-xf2cl" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.742051 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e20c111-d37e-4ab6-8096-92fa51a4c4f8-scripts\") pod \"nova-cell0-cell-mapping-xf2cl\" (UID: \"3e20c111-d37e-4ab6-8096-92fa51a4c4f8\") " pod="openstack/nova-cell0-cell-mapping-xf2cl" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.742112 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32201a7e-0023-449d-957a-974934357899-logs\") pod 
\"nova-api-0\" (UID: \"32201a7e-0023-449d-957a-974934357899\") " pod="openstack/nova-api-0" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.742136 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32201a7e-0023-449d-957a-974934357899-config-data\") pod \"nova-api-0\" (UID: \"32201a7e-0023-449d-957a-974934357899\") " pod="openstack/nova-api-0" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.742162 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32201a7e-0023-449d-957a-974934357899-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"32201a7e-0023-449d-957a-974934357899\") " pod="openstack/nova-api-0" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.742188 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e20c111-d37e-4ab6-8096-92fa51a4c4f8-config-data\") pod \"nova-cell0-cell-mapping-xf2cl\" (UID: \"3e20c111-d37e-4ab6-8096-92fa51a4c4f8\") " pod="openstack/nova-cell0-cell-mapping-xf2cl" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.742224 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e20c111-d37e-4ab6-8096-92fa51a4c4f8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xf2cl\" (UID: \"3e20c111-d37e-4ab6-8096-92fa51a4c4f8\") " pod="openstack/nova-cell0-cell-mapping-xf2cl" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.742257 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cjwv\" (UniqueName: \"kubernetes.io/projected/32201a7e-0023-449d-957a-974934357899-kube-api-access-4cjwv\") pod \"nova-api-0\" (UID: \"32201a7e-0023-449d-957a-974934357899\") " pod="openstack/nova-api-0" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.745126 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87138863-564a-46e7-80b0-eef5a19d7977-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "87138863-564a-46e7-80b0-eef5a19d7977" (UID: "87138863-564a-46e7-80b0-eef5a19d7977"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.745274 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87138863-564a-46e7-80b0-eef5a19d7977-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "87138863-564a-46e7-80b0-eef5a19d7977" (UID: "87138863-564a-46e7-80b0-eef5a19d7977"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.751998 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e20c111-d37e-4ab6-8096-92fa51a4c4f8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xf2cl\" (UID: \"3e20c111-d37e-4ab6-8096-92fa51a4c4f8\") " pod="openstack/nova-cell0-cell-mapping-xf2cl" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.760007 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e20c111-d37e-4ab6-8096-92fa51a4c4f8-scripts\") pod \"nova-cell0-cell-mapping-xf2cl\" (UID: \"3e20c111-d37e-4ab6-8096-92fa51a4c4f8\") " pod="openstack/nova-cell0-cell-mapping-xf2cl" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.769004 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.771767 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87138863-564a-46e7-80b0-eef5a19d7977-kube-api-access-nnfhv" (OuterVolumeSpecName: "kube-api-access-nnfhv") pod "87138863-564a-46e7-80b0-eef5a19d7977" (UID: "87138863-564a-46e7-80b0-eef5a19d7977"). InnerVolumeSpecName "kube-api-access-nnfhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.779296 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e20c111-d37e-4ab6-8096-92fa51a4c4f8-config-data\") pod \"nova-cell0-cell-mapping-xf2cl\" (UID: \"3e20c111-d37e-4ab6-8096-92fa51a4c4f8\") " pod="openstack/nova-cell0-cell-mapping-xf2cl" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.781784 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87138863-564a-46e7-80b0-eef5a19d7977-scripts" (OuterVolumeSpecName: "scripts") pod "87138863-564a-46e7-80b0-eef5a19d7977" (UID: "87138863-564a-46e7-80b0-eef5a19d7977"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.803331 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 13:07:09 crc kubenswrapper[4833]: E0219 13:07:09.803768 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87138863-564a-46e7-80b0-eef5a19d7977" containerName="sg-core" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.803786 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="87138863-564a-46e7-80b0-eef5a19d7977" containerName="sg-core" Feb 19 13:07:09 crc kubenswrapper[4833]: E0219 13:07:09.803815 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87138863-564a-46e7-80b0-eef5a19d7977" containerName="proxy-httpd" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.803821 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="87138863-564a-46e7-80b0-eef5a19d7977" containerName="proxy-httpd" Feb 19 13:07:09 crc kubenswrapper[4833]: E0219 13:07:09.803839 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87138863-564a-46e7-80b0-eef5a19d7977" containerName="ceilometer-central-agent" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.803845 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="87138863-564a-46e7-80b0-eef5a19d7977" containerName="ceilometer-central-agent" Feb 19 13:07:09 crc kubenswrapper[4833]: E0219 13:07:09.803853 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87138863-564a-46e7-80b0-eef5a19d7977" containerName="ceilometer-notification-agent" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.803859 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="87138863-564a-46e7-80b0-eef5a19d7977" containerName="ceilometer-notification-agent" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.804016 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="87138863-564a-46e7-80b0-eef5a19d7977" containerName="sg-core" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.804029 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="87138863-564a-46e7-80b0-eef5a19d7977" containerName="proxy-httpd" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.804042 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="87138863-564a-46e7-80b0-eef5a19d7977" containerName="ceilometer-notification-agent" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.804052 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="87138863-564a-46e7-80b0-eef5a19d7977" containerName="ceilometer-central-agent" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.804961 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.806045 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr5hv\" (UniqueName: \"kubernetes.io/projected/3e20c111-d37e-4ab6-8096-92fa51a4c4f8-kube-api-access-cr5hv\") pod \"nova-cell0-cell-mapping-xf2cl\" (UID: \"3e20c111-d37e-4ab6-8096-92fa51a4c4f8\") " pod="openstack/nova-cell0-cell-mapping-xf2cl" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.807679 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.859092 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.864737 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32201a7e-0023-449d-957a-974934357899-logs\") pod \"nova-api-0\" (UID: \"32201a7e-0023-449d-957a-974934357899\") " pod="openstack/nova-api-0" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.864787 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04a482f4-f946-4c01-8f41-e1fbecbc2f53-logs\") pod \"nova-metadata-0\" (UID: \"04a482f4-f946-4c01-8f41-e1fbecbc2f53\") " pod="openstack/nova-metadata-0" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.864808 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spmmm\" (UniqueName: \"kubernetes.io/projected/04a482f4-f946-4c01-8f41-e1fbecbc2f53-kube-api-access-spmmm\") pod \"nova-metadata-0\" (UID: \"04a482f4-f946-4c01-8f41-e1fbecbc2f53\") " pod="openstack/nova-metadata-0" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.864830 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32201a7e-0023-449d-957a-974934357899-config-data\") pod \"nova-api-0\" (UID: \"32201a7e-0023-449d-957a-974934357899\") " pod="openstack/nova-api-0" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.864867 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32201a7e-0023-449d-957a-974934357899-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"32201a7e-0023-449d-957a-974934357899\") " pod="openstack/nova-api-0" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.864930 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cjwv\" (UniqueName: \"kubernetes.io/projected/32201a7e-0023-449d-957a-974934357899-kube-api-access-4cjwv\") pod \"nova-api-0\" (UID: \"32201a7e-0023-449d-957a-974934357899\") " pod="openstack/nova-api-0" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.865015 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04a482f4-f946-4c01-8f41-e1fbecbc2f53-config-data\") pod \"nova-metadata-0\" (UID: \"04a482f4-f946-4c01-8f41-e1fbecbc2f53\") " pod="openstack/nova-metadata-0" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.865073 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/04a482f4-f946-4c01-8f41-e1fbecbc2f53-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"04a482f4-f946-4c01-8f41-e1fbecbc2f53\") " pod="openstack/nova-metadata-0" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.865141 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87138863-564a-46e7-80b0-eef5a19d7977-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.865152 4833 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87138863-564a-46e7-80b0-eef5a19d7977-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.865162 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnfhv\" (UniqueName: \"kubernetes.io/projected/87138863-564a-46e7-80b0-eef5a19d7977-kube-api-access-nnfhv\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.865172 4833 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/87138863-564a-46e7-80b0-eef5a19d7977-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.865526 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32201a7e-0023-449d-957a-974934357899-logs\") pod \"nova-api-0\" (UID: \"32201a7e-0023-449d-957a-974934357899\") " pod="openstack/nova-api-0" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.880108 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32201a7e-0023-449d-957a-974934357899-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"32201a7e-0023-449d-957a-974934357899\") " pod="openstack/nova-api-0" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.881325 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32201a7e-0023-449d-957a-974934357899-config-data\") pod \"nova-api-0\" (UID: \"32201a7e-0023-449d-957a-974934357899\") " pod="openstack/nova-api-0" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.881872 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87138863-564a-46e7-80b0-eef5a19d7977-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "87138863-564a-46e7-80b0-eef5a19d7977" (UID: "87138863-564a-46e7-80b0-eef5a19d7977"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.891427 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.902429 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cjwv\" (UniqueName: \"kubernetes.io/projected/32201a7e-0023-449d-957a-974934357899-kube-api-access-4cjwv\") pod \"nova-api-0\" (UID: \"32201a7e-0023-449d-957a-974934357899\") " pod="openstack/nova-api-0" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.907642 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xf2cl" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.909989 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.912602 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.923581 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-tmzpd"] Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.926147 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-tmzpd" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.968849 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04a482f4-f946-4c01-8f41-e1fbecbc2f53-config-data\") pod \"nova-metadata-0\" (UID: \"04a482f4-f946-4c01-8f41-e1fbecbc2f53\") " pod="openstack/nova-metadata-0" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.969449 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c84a4ef-18de-46c4-badf-8feafe986252-dns-svc\") pod \"dnsmasq-dns-bccf8f775-tmzpd\" (UID: \"1c84a4ef-18de-46c4-badf-8feafe986252\") " pod="openstack/dnsmasq-dns-bccf8f775-tmzpd" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.969588 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04a482f4-f946-4c01-8f41-e1fbecbc2f53-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"04a482f4-f946-4c01-8f41-e1fbecbc2f53\") " pod="openstack/nova-metadata-0" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.969673 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/835f7f54-2151-44af-897e-1f21f96b6924-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"835f7f54-2151-44af-897e-1f21f96b6924\") " pod="openstack/nova-scheduler-0" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.969805 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c84a4ef-18de-46c4-badf-8feafe986252-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-tmzpd\" (UID: \"1c84a4ef-18de-46c4-badf-8feafe986252\") " pod="openstack/dnsmasq-dns-bccf8f775-tmzpd" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.969907 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/835f7f54-2151-44af-897e-1f21f96b6924-config-data\") pod \"nova-scheduler-0\" (UID: \"835f7f54-2151-44af-897e-1f21f96b6924\") " pod="openstack/nova-scheduler-0" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.970012 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8kgl\" (UniqueName: \"kubernetes.io/projected/835f7f54-2151-44af-897e-1f21f96b6924-kube-api-access-g8kgl\") pod \"nova-scheduler-0\" (UID: \"835f7f54-2151-44af-897e-1f21f96b6924\") " pod="openstack/nova-scheduler-0" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.970147 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04a482f4-f946-4c01-8f41-e1fbecbc2f53-logs\") pod \"nova-metadata-0\" (UID: 
\"04a482f4-f946-4c01-8f41-e1fbecbc2f53\") " pod="openstack/nova-metadata-0" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.970218 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spmmm\" (UniqueName: \"kubernetes.io/projected/04a482f4-f946-4c01-8f41-e1fbecbc2f53-kube-api-access-spmmm\") pod \"nova-metadata-0\" (UID: \"04a482f4-f946-4c01-8f41-e1fbecbc2f53\") " pod="openstack/nova-metadata-0" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.970955 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkfmt\" (UniqueName: \"kubernetes.io/projected/1c84a4ef-18de-46c4-badf-8feafe986252-kube-api-access-kkfmt\") pod \"dnsmasq-dns-bccf8f775-tmzpd\" (UID: \"1c84a4ef-18de-46c4-badf-8feafe986252\") " pod="openstack/dnsmasq-dns-bccf8f775-tmzpd" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.971271 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c84a4ef-18de-46c4-badf-8feafe986252-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-tmzpd\" (UID: \"1c84a4ef-18de-46c4-badf-8feafe986252\") " pod="openstack/dnsmasq-dns-bccf8f775-tmzpd" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.971678 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c84a4ef-18de-46c4-badf-8feafe986252-config\") pod \"dnsmasq-dns-bccf8f775-tmzpd\" (UID: \"1c84a4ef-18de-46c4-badf-8feafe986252\") " pod="openstack/dnsmasq-dns-bccf8f775-tmzpd" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.971845 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c84a4ef-18de-46c4-badf-8feafe986252-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-tmzpd\" (UID: \"1c84a4ef-18de-46c4-badf-8feafe986252\") " pod="openstack/dnsmasq-dns-bccf8f775-tmzpd" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.974307 4833 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/87138863-564a-46e7-80b0-eef5a19d7977-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.973655 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.977435 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04a482f4-f946-4c01-8f41-e1fbecbc2f53-logs\") pod \"nova-metadata-0\" (UID: \"04a482f4-f946-4c01-8f41-e1fbecbc2f53\") " pod="openstack/nova-metadata-0" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.981441 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04a482f4-f946-4c01-8f41-e1fbecbc2f53-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"04a482f4-f946-4c01-8f41-e1fbecbc2f53\") " pod="openstack/nova-metadata-0" Feb 19 13:07:09 crc kubenswrapper[4833]: I0219 13:07:09.985865 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04a482f4-f946-4c01-8f41-e1fbecbc2f53-config-data\") pod \"nova-metadata-0\" (UID: \"04a482f4-f946-4c01-8f41-e1fbecbc2f53\") " pod="openstack/nova-metadata-0" Feb 19 13:07:10 
crc kubenswrapper[4833]: I0219 13:07:10.010140 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-tmzpd"] Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.017174 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spmmm\" (UniqueName: \"kubernetes.io/projected/04a482f4-f946-4c01-8f41-e1fbecbc2f53-kube-api-access-spmmm\") pod \"nova-metadata-0\" (UID: \"04a482f4-f946-4c01-8f41-e1fbecbc2f53\") " pod="openstack/nova-metadata-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.029815 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87138863-564a-46e7-80b0-eef5a19d7977-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87138863-564a-46e7-80b0-eef5a19d7977" (UID: "87138863-564a-46e7-80b0-eef5a19d7977"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.037086 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87138863-564a-46e7-80b0-eef5a19d7977-config-data" (OuterVolumeSpecName: "config-data") pod "87138863-564a-46e7-80b0-eef5a19d7977" (UID: "87138863-564a-46e7-80b0-eef5a19d7977"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.055444 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.056670 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.059940 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.076361 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.078091 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c84a4ef-18de-46c4-badf-8feafe986252-config\") pod \"dnsmasq-dns-bccf8f775-tmzpd\" (UID: \"1c84a4ef-18de-46c4-badf-8feafe986252\") " pod="openstack/dnsmasq-dns-bccf8f775-tmzpd" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.078201 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c84a4ef-18de-46c4-badf-8feafe986252-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-tmzpd\" (UID: \"1c84a4ef-18de-46c4-badf-8feafe986252\") " pod="openstack/dnsmasq-dns-bccf8f775-tmzpd" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.078293 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c84a4ef-18de-46c4-badf-8feafe986252-dns-svc\") pod \"dnsmasq-dns-bccf8f775-tmzpd\" (UID: \"1c84a4ef-18de-46c4-badf-8feafe986252\") " pod="openstack/dnsmasq-dns-bccf8f775-tmzpd" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.078345 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/835f7f54-2151-44af-897e-1f21f96b6924-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"835f7f54-2151-44af-897e-1f21f96b6924\") " 
pod="openstack/nova-scheduler-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.078394 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c84a4ef-18de-46c4-badf-8feafe986252-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-tmzpd\" (UID: \"1c84a4ef-18de-46c4-badf-8feafe986252\") " pod="openstack/dnsmasq-dns-bccf8f775-tmzpd" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.078427 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/835f7f54-2151-44af-897e-1f21f96b6924-config-data\") pod \"nova-scheduler-0\" (UID: \"835f7f54-2151-44af-897e-1f21f96b6924\") " pod="openstack/nova-scheduler-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.078454 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8kgl\" (UniqueName: \"kubernetes.io/projected/835f7f54-2151-44af-897e-1f21f96b6924-kube-api-access-g8kgl\") pod \"nova-scheduler-0\" (UID: \"835f7f54-2151-44af-897e-1f21f96b6924\") " pod="openstack/nova-scheduler-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.078506 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkfmt\" (UniqueName: \"kubernetes.io/projected/1c84a4ef-18de-46c4-badf-8feafe986252-kube-api-access-kkfmt\") pod \"dnsmasq-dns-bccf8f775-tmzpd\" (UID: \"1c84a4ef-18de-46c4-badf-8feafe986252\") " pod="openstack/dnsmasq-dns-bccf8f775-tmzpd" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.078537 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c84a4ef-18de-46c4-badf-8feafe986252-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-tmzpd\" (UID: \"1c84a4ef-18de-46c4-badf-8feafe986252\") " pod="openstack/dnsmasq-dns-bccf8f775-tmzpd" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.078649 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87138863-564a-46e7-80b0-eef5a19d7977-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.078671 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87138863-564a-46e7-80b0-eef5a19d7977-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.079582 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c84a4ef-18de-46c4-badf-8feafe986252-config\") pod \"dnsmasq-dns-bccf8f775-tmzpd\" (UID: \"1c84a4ef-18de-46c4-badf-8feafe986252\") " pod="openstack/dnsmasq-dns-bccf8f775-tmzpd" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.079973 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c84a4ef-18de-46c4-badf-8feafe986252-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-tmzpd\" (UID: \"1c84a4ef-18de-46c4-badf-8feafe986252\") " pod="openstack/dnsmasq-dns-bccf8f775-tmzpd" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.080553 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c84a4ef-18de-46c4-badf-8feafe986252-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-tmzpd\" (UID: 
\"1c84a4ef-18de-46c4-badf-8feafe986252\") " pod="openstack/dnsmasq-dns-bccf8f775-tmzpd" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.080617 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c84a4ef-18de-46c4-badf-8feafe986252-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-tmzpd\" (UID: \"1c84a4ef-18de-46c4-badf-8feafe986252\") " pod="openstack/dnsmasq-dns-bccf8f775-tmzpd" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.081207 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c84a4ef-18de-46c4-badf-8feafe986252-dns-svc\") pod \"dnsmasq-dns-bccf8f775-tmzpd\" (UID: \"1c84a4ef-18de-46c4-badf-8feafe986252\") " pod="openstack/dnsmasq-dns-bccf8f775-tmzpd" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.084971 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/835f7f54-2151-44af-897e-1f21f96b6924-config-data\") pod \"nova-scheduler-0\" (UID: \"835f7f54-2151-44af-897e-1f21f96b6924\") " pod="openstack/nova-scheduler-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.085162 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/835f7f54-2151-44af-897e-1f21f96b6924-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"835f7f54-2151-44af-897e-1f21f96b6924\") " pod="openstack/nova-scheduler-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.098628 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8kgl\" (UniqueName: \"kubernetes.io/projected/835f7f54-2151-44af-897e-1f21f96b6924-kube-api-access-g8kgl\") pod \"nova-scheduler-0\" (UID: \"835f7f54-2151-44af-897e-1f21f96b6924\") " pod="openstack/nova-scheduler-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.105564 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkfmt\" (UniqueName: \"kubernetes.io/projected/1c84a4ef-18de-46c4-badf-8feafe986252-kube-api-access-kkfmt\") pod \"dnsmasq-dns-bccf8f775-tmzpd\" (UID: \"1c84a4ef-18de-46c4-badf-8feafe986252\") " pod="openstack/dnsmasq-dns-bccf8f775-tmzpd" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.160885 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.180455 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzrcq\" (UniqueName: \"kubernetes.io/projected/5237bb0f-1ee7-451b-9457-d72c90993794-kube-api-access-dzrcq\") pod \"nova-cell1-novncproxy-0\" (UID: \"5237bb0f-1ee7-451b-9457-d72c90993794\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.180563 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5237bb0f-1ee7-451b-9457-d72c90993794-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5237bb0f-1ee7-451b-9457-d72c90993794\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.180628 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5237bb0f-1ee7-451b-9457-d72c90993794-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5237bb0f-1ee7-451b-9457-d72c90993794\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.190142 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.243336 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.253530 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-tmzpd" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.284231 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5237bb0f-1ee7-451b-9457-d72c90993794-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5237bb0f-1ee7-451b-9457-d72c90993794\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.284581 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzrcq\" (UniqueName: \"kubernetes.io/projected/5237bb0f-1ee7-451b-9457-d72c90993794-kube-api-access-dzrcq\") pod \"nova-cell1-novncproxy-0\" (UID: \"5237bb0f-1ee7-451b-9457-d72c90993794\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.284659 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5237bb0f-1ee7-451b-9457-d72c90993794-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5237bb0f-1ee7-451b-9457-d72c90993794\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.298861 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5237bb0f-1ee7-451b-9457-d72c90993794-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5237bb0f-1ee7-451b-9457-d72c90993794\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.298909 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5237bb0f-1ee7-451b-9457-d72c90993794-combined-ca-bundle\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"5237bb0f-1ee7-451b-9457-d72c90993794\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.313369 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzrcq\" (UniqueName: \"kubernetes.io/projected/5237bb0f-1ee7-451b-9457-d72c90993794-kube-api-access-dzrcq\") pod \"nova-cell1-novncproxy-0\" (UID: \"5237bb0f-1ee7-451b-9457-d72c90993794\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.392115 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.457141 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-xf2cl"] Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.596841 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"87138863-564a-46e7-80b0-eef5a19d7977","Type":"ContainerDied","Data":"bd6851d2ab750cda050f2847a87de5ae3627186835976cb8c5cbe19d98e15e21"} Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.597125 4833 scope.go:117] "RemoveContainer" containerID="0d73f3ffe69a520b27305f6cce65bedcd598f033c276f9c1caf76a9e06e67868" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.597181 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.602755 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xf2cl" event={"ID":"3e20c111-d37e-4ab6-8096-92fa51a4c4f8","Type":"ContainerStarted","Data":"39bbc9a8650bca5a1a70eed6f71cba5ef4445045d13c3301f7db8b9916f11299"} Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.640393 4833 scope.go:117] "RemoveContainer" containerID="f334ac772d67a4815bfbfc4911bb5aca62ed2ca22349f15088bea6f7c7dd2fd8" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.661866 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.673859 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.682660 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.684758 4833 scope.go:117] "RemoveContainer" containerID="1456a8661612e3fa4a5e6ab44301c2445c0b0d1bcedfc37c4f5a3a7ac3372e30" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.684964 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.687722 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.687747 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.687976 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.694985 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xgh42"] Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.696176 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xgh42" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.700062 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.700209 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.714606 4833 scope.go:117] "RemoveContainer" containerID="dcf54e4cdc78b40e62c3ae3515d072a6e94994455313f28cd6a6019a72bd8d7a" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.720131 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.725991 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.733417 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xgh42"] Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.823667 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qf8l\" (UniqueName: \"kubernetes.io/projected/d9bfbaad-5149-4e04-baac-a9d0c0c569d0-kube-api-access-2qf8l\") pod \"nova-cell1-conductor-db-sync-xgh42\" (UID: \"d9bfbaad-5149-4e04-baac-a9d0c0c569d0\") " pod="openstack/nova-cell1-conductor-db-sync-xgh42" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.823719 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab2a9f14-787f-4834-a8e0-f4b55638492d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ab2a9f14-787f-4834-a8e0-f4b55638492d\") " pod="openstack/ceilometer-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.823756 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab2a9f14-787f-4834-a8e0-f4b55638492d-log-httpd\") pod \"ceilometer-0\" (UID: \"ab2a9f14-787f-4834-a8e0-f4b55638492d\") " pod="openstack/ceilometer-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.823775 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8hzk\" (UniqueName: \"kubernetes.io/projected/ab2a9f14-787f-4834-a8e0-f4b55638492d-kube-api-access-z8hzk\") pod \"ceilometer-0\" (UID: \"ab2a9f14-787f-4834-a8e0-f4b55638492d\") " pod="openstack/ceilometer-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.823818 4833 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab2a9f14-787f-4834-a8e0-f4b55638492d-scripts\") pod \"ceilometer-0\" (UID: \"ab2a9f14-787f-4834-a8e0-f4b55638492d\") " pod="openstack/ceilometer-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.823834 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9bfbaad-5149-4e04-baac-a9d0c0c569d0-scripts\") pod \"nova-cell1-conductor-db-sync-xgh42\" (UID: \"d9bfbaad-5149-4e04-baac-a9d0c0c569d0\") " pod="openstack/nova-cell1-conductor-db-sync-xgh42" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.823857 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab2a9f14-787f-4834-a8e0-f4b55638492d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ab2a9f14-787f-4834-a8e0-f4b55638492d\") " pod="openstack/ceilometer-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.823871 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab2a9f14-787f-4834-a8e0-f4b55638492d-run-httpd\") pod \"ceilometer-0\" (UID: \"ab2a9f14-787f-4834-a8e0-f4b55638492d\") " pod="openstack/ceilometer-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.823890 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab2a9f14-787f-4834-a8e0-f4b55638492d-config-data\") pod \"ceilometer-0\" (UID: \"ab2a9f14-787f-4834-a8e0-f4b55638492d\") " pod="openstack/ceilometer-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.823921 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9bfbaad-5149-4e04-baac-a9d0c0c569d0-config-data\") pod \"nova-cell1-conductor-db-sync-xgh42\" (UID: \"d9bfbaad-5149-4e04-baac-a9d0c0c569d0\") " pod="openstack/nova-cell1-conductor-db-sync-xgh42" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.823950 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9bfbaad-5149-4e04-baac-a9d0c0c569d0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-xgh42\" (UID: \"d9bfbaad-5149-4e04-baac-a9d0c0c569d0\") " pod="openstack/nova-cell1-conductor-db-sync-xgh42" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.823971 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab2a9f14-787f-4834-a8e0-f4b55638492d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ab2a9f14-787f-4834-a8e0-f4b55638492d\") " pod="openstack/ceilometer-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.869833 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 13:07:10 crc kubenswrapper[4833]: W0219 13:07:10.885629 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04a482f4_f946_4c01_8f41_e1fbecbc2f53.slice/crio-6910f713f43130c849bd2d4c0f77c71934d918ca84e2191d423c066144f71f09 WatchSource:0}: Error finding container 
6910f713f43130c849bd2d4c0f77c71934d918ca84e2191d423c066144f71f09: Status 404 returned error can't find the container with id 6910f713f43130c849bd2d4c0f77c71934d918ca84e2191d423c066144f71f09 Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.925910 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9bfbaad-5149-4e04-baac-a9d0c0c569d0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-xgh42\" (UID: \"d9bfbaad-5149-4e04-baac-a9d0c0c569d0\") " pod="openstack/nova-cell1-conductor-db-sync-xgh42" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.925968 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab2a9f14-787f-4834-a8e0-f4b55638492d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ab2a9f14-787f-4834-a8e0-f4b55638492d\") " pod="openstack/ceilometer-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.926239 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qf8l\" (UniqueName: \"kubernetes.io/projected/d9bfbaad-5149-4e04-baac-a9d0c0c569d0-kube-api-access-2qf8l\") pod \"nova-cell1-conductor-db-sync-xgh42\" (UID: \"d9bfbaad-5149-4e04-baac-a9d0c0c569d0\") " pod="openstack/nova-cell1-conductor-db-sync-xgh42" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.926265 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab2a9f14-787f-4834-a8e0-f4b55638492d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ab2a9f14-787f-4834-a8e0-f4b55638492d\") " pod="openstack/ceilometer-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.926302 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab2a9f14-787f-4834-a8e0-f4b55638492d-log-httpd\") pod \"ceilometer-0\" (UID: \"ab2a9f14-787f-4834-a8e0-f4b55638492d\") " pod="openstack/ceilometer-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.926328 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8hzk\" (UniqueName: \"kubernetes.io/projected/ab2a9f14-787f-4834-a8e0-f4b55638492d-kube-api-access-z8hzk\") pod \"ceilometer-0\" (UID: \"ab2a9f14-787f-4834-a8e0-f4b55638492d\") " pod="openstack/ceilometer-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.926386 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab2a9f14-787f-4834-a8e0-f4b55638492d-scripts\") pod \"ceilometer-0\" (UID: \"ab2a9f14-787f-4834-a8e0-f4b55638492d\") " pod="openstack/ceilometer-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.926409 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9bfbaad-5149-4e04-baac-a9d0c0c569d0-scripts\") pod \"nova-cell1-conductor-db-sync-xgh42\" (UID: \"d9bfbaad-5149-4e04-baac-a9d0c0c569d0\") " pod="openstack/nova-cell1-conductor-db-sync-xgh42" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.926440 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab2a9f14-787f-4834-a8e0-f4b55638492d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ab2a9f14-787f-4834-a8e0-f4b55638492d\") " pod="openstack/ceilometer-0" Feb 19 
13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.926456 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab2a9f14-787f-4834-a8e0-f4b55638492d-run-httpd\") pod \"ceilometer-0\" (UID: \"ab2a9f14-787f-4834-a8e0-f4b55638492d\") " pod="openstack/ceilometer-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.926475 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab2a9f14-787f-4834-a8e0-f4b55638492d-config-data\") pod \"ceilometer-0\" (UID: \"ab2a9f14-787f-4834-a8e0-f4b55638492d\") " pod="openstack/ceilometer-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.926521 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9bfbaad-5149-4e04-baac-a9d0c0c569d0-config-data\") pod \"nova-cell1-conductor-db-sync-xgh42\" (UID: \"d9bfbaad-5149-4e04-baac-a9d0c0c569d0\") " pod="openstack/nova-cell1-conductor-db-sync-xgh42" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.926893 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab2a9f14-787f-4834-a8e0-f4b55638492d-log-httpd\") pod \"ceilometer-0\" (UID: \"ab2a9f14-787f-4834-a8e0-f4b55638492d\") " pod="openstack/ceilometer-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.942468 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab2a9f14-787f-4834-a8e0-f4b55638492d-run-httpd\") pod \"ceilometer-0\" (UID: \"ab2a9f14-787f-4834-a8e0-f4b55638492d\") " pod="openstack/ceilometer-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.947372 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9bfbaad-5149-4e04-baac-a9d0c0c569d0-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-xgh42\" (UID: \"d9bfbaad-5149-4e04-baac-a9d0c0c569d0\") " pod="openstack/nova-cell1-conductor-db-sync-xgh42" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.953614 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab2a9f14-787f-4834-a8e0-f4b55638492d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ab2a9f14-787f-4834-a8e0-f4b55638492d\") " pod="openstack/ceilometer-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.956201 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab2a9f14-787f-4834-a8e0-f4b55638492d-scripts\") pod \"ceilometer-0\" (UID: \"ab2a9f14-787f-4834-a8e0-f4b55638492d\") " pod="openstack/ceilometer-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.959950 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab2a9f14-787f-4834-a8e0-f4b55638492d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ab2a9f14-787f-4834-a8e0-f4b55638492d\") " pod="openstack/ceilometer-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.960122 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9bfbaad-5149-4e04-baac-a9d0c0c569d0-config-data\") pod \"nova-cell1-conductor-db-sync-xgh42\" (UID: \"d9bfbaad-5149-4e04-baac-a9d0c0c569d0\") " 
pod="openstack/nova-cell1-conductor-db-sync-xgh42" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.963205 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qf8l\" (UniqueName: \"kubernetes.io/projected/d9bfbaad-5149-4e04-baac-a9d0c0c569d0-kube-api-access-2qf8l\") pod \"nova-cell1-conductor-db-sync-xgh42\" (UID: \"d9bfbaad-5149-4e04-baac-a9d0c0c569d0\") " pod="openstack/nova-cell1-conductor-db-sync-xgh42" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.968066 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab2a9f14-787f-4834-a8e0-f4b55638492d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ab2a9f14-787f-4834-a8e0-f4b55638492d\") " pod="openstack/ceilometer-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.971162 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9bfbaad-5149-4e04-baac-a9d0c0c569d0-scripts\") pod \"nova-cell1-conductor-db-sync-xgh42\" (UID: \"d9bfbaad-5149-4e04-baac-a9d0c0c569d0\") " pod="openstack/nova-cell1-conductor-db-sync-xgh42" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.980295 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8hzk\" (UniqueName: \"kubernetes.io/projected/ab2a9f14-787f-4834-a8e0-f4b55638492d-kube-api-access-z8hzk\") pod \"ceilometer-0\" (UID: \"ab2a9f14-787f-4834-a8e0-f4b55638492d\") " pod="openstack/ceilometer-0" Feb 19 13:07:10 crc kubenswrapper[4833]: I0219 13:07:10.982680 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab2a9f14-787f-4834-a8e0-f4b55638492d-config-data\") pod \"ceilometer-0\" (UID: \"ab2a9f14-787f-4834-a8e0-f4b55638492d\") " pod="openstack/ceilometer-0" Feb 19 13:07:11 crc kubenswrapper[4833]: I0219 13:07:11.012965 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:07:11 crc kubenswrapper[4833]: I0219 13:07:11.037746 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xgh42" Feb 19 13:07:11 crc kubenswrapper[4833]: I0219 13:07:11.047108 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-tmzpd"] Feb 19 13:07:11 crc kubenswrapper[4833]: W0219 13:07:11.125331 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5237bb0f_1ee7_451b_9457_d72c90993794.slice/crio-d4883cf8be2978fed43a86e74810bccfb3f323a41ed1d9b61c4cd0039aa326c8 WatchSource:0}: Error finding container d4883cf8be2978fed43a86e74810bccfb3f323a41ed1d9b61c4cd0039aa326c8: Status 404 returned error can't find the container with id d4883cf8be2978fed43a86e74810bccfb3f323a41ed1d9b61c4cd0039aa326c8 Feb 19 13:07:11 crc kubenswrapper[4833]: I0219 13:07:11.126454 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 13:07:11 crc kubenswrapper[4833]: I0219 13:07:11.177783 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 13:07:11 crc kubenswrapper[4833]: I0219 13:07:11.521018 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:07:11 crc kubenswrapper[4833]: I0219 13:07:11.606266 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xgh42"] Feb 19 13:07:11 crc kubenswrapper[4833]: I0219 13:07:11.621336 4833 generic.go:334] "Generic (PLEG): container finished" podID="1c84a4ef-18de-46c4-badf-8feafe986252" containerID="085be12acf38eb8a2e5a6c1648d66d61ece278b10a676ec46c5b0a80078722f8" exitCode=0 Feb 19 13:07:11 crc kubenswrapper[4833]: I0219 13:07:11.621407 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-tmzpd" event={"ID":"1c84a4ef-18de-46c4-badf-8feafe986252","Type":"ContainerDied","Data":"085be12acf38eb8a2e5a6c1648d66d61ece278b10a676ec46c5b0a80078722f8"} Feb 19 13:07:11 crc kubenswrapper[4833]: I0219 13:07:11.621435 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-tmzpd" event={"ID":"1c84a4ef-18de-46c4-badf-8feafe986252","Type":"ContainerStarted","Data":"be85ae0c0c49c6bc75171bd01f037978dd08960bea1a55492c2b88736fc9c98b"} Feb 19 13:07:11 crc kubenswrapper[4833]: I0219 13:07:11.629058 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"04a482f4-f946-4c01-8f41-e1fbecbc2f53","Type":"ContainerStarted","Data":"6910f713f43130c849bd2d4c0f77c71934d918ca84e2191d423c066144f71f09"} Feb 19 13:07:11 crc kubenswrapper[4833]: I0219 13:07:11.634410 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"32201a7e-0023-449d-957a-974934357899","Type":"ContainerStarted","Data":"b521ed24a7d5252fd031fc06e53d40cc78c3cc5e057ecd931bf3bad633755a5b"} Feb 19 13:07:11 crc kubenswrapper[4833]: I0219 13:07:11.640147 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xf2cl" event={"ID":"3e20c111-d37e-4ab6-8096-92fa51a4c4f8","Type":"ContainerStarted","Data":"0c4e8e971796a06a16ae30f06e68b72ceaf71bc2d6404a5f5b1ddc3366307f00"} Feb 19 13:07:11 crc kubenswrapper[4833]: I0219 13:07:11.641528 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"835f7f54-2151-44af-897e-1f21f96b6924","Type":"ContainerStarted","Data":"4056a53c2d84cb7e0b3065838bf24b8ffb26353aa6a977ec45403651e9b2e6b1"} Feb 19 13:07:11 crc kubenswrapper[4833]: I0219 
13:07:11.662363 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab2a9f14-787f-4834-a8e0-f4b55638492d","Type":"ContainerStarted","Data":"339cbdf4cd7c2c36f9a40bed6d0149c0ade08d8b1ea8d874827a170519faa3e6"} Feb 19 13:07:11 crc kubenswrapper[4833]: I0219 13:07:11.672579 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5237bb0f-1ee7-451b-9457-d72c90993794","Type":"ContainerStarted","Data":"d4883cf8be2978fed43a86e74810bccfb3f323a41ed1d9b61c4cd0039aa326c8"} Feb 19 13:07:11 crc kubenswrapper[4833]: I0219 13:07:11.681702 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-xf2cl" podStartSLOduration=2.681687213 podStartE2EDuration="2.681687213s" podCreationTimestamp="2026-02-19 13:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:07:11.679304548 +0000 UTC m=+1242.074823316" watchObservedRunningTime="2026-02-19 13:07:11.681687213 +0000 UTC m=+1242.077205971" Feb 19 13:07:12 crc kubenswrapper[4833]: I0219 13:07:12.330462 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87138863-564a-46e7-80b0-eef5a19d7977" path="/var/lib/kubelet/pods/87138863-564a-46e7-80b0-eef5a19d7977/volumes" Feb 19 13:07:12 crc kubenswrapper[4833]: I0219 13:07:12.690669 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab2a9f14-787f-4834-a8e0-f4b55638492d","Type":"ContainerStarted","Data":"4c815edfdeac571082a304d3654501a286d20accf557521f71996c7e5f169fe6"} Feb 19 13:07:12 crc kubenswrapper[4833]: I0219 13:07:12.696156 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xgh42" event={"ID":"d9bfbaad-5149-4e04-baac-a9d0c0c569d0","Type":"ContainerStarted","Data":"3be9148bd920a373f0eac2d2a3fd31c80ddfd2bf7deea6c915740a03c089743c"} Feb 19 13:07:12 crc kubenswrapper[4833]: I0219 13:07:12.696310 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xgh42" event={"ID":"d9bfbaad-5149-4e04-baac-a9d0c0c569d0","Type":"ContainerStarted","Data":"33d893df1e7f0759a7aa39829edd25c921ac364a5b9a6f86e0ee9b4e0f0b03e6"} Feb 19 13:07:12 crc kubenswrapper[4833]: I0219 13:07:12.702961 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-tmzpd" event={"ID":"1c84a4ef-18de-46c4-badf-8feafe986252","Type":"ContainerStarted","Data":"b5946dd025acb1fa3f7ac2c615490f0f33a9fd0750c6b0d31837c3b5a28ed6de"} Feb 19 13:07:12 crc kubenswrapper[4833]: I0219 13:07:12.703362 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-tmzpd" Feb 19 13:07:12 crc kubenswrapper[4833]: I0219 13:07:12.720340 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-xgh42" podStartSLOduration=2.720319837 podStartE2EDuration="2.720319837s" podCreationTimestamp="2026-02-19 13:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:07:12.715081783 +0000 UTC m=+1243.110600551" watchObservedRunningTime="2026-02-19 13:07:12.720319837 +0000 UTC m=+1243.115838605" Feb 19 13:07:13 crc kubenswrapper[4833]: I0219 13:07:13.058143 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-bccf8f775-tmzpd" podStartSLOduration=4.05811755 podStartE2EDuration="4.05811755s" podCreationTimestamp="2026-02-19 13:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:07:12.74723871 +0000 UTC m=+1243.142757498" watchObservedRunningTime="2026-02-19 13:07:13.05811755 +0000 UTC m=+1243.453636318" Feb 19 13:07:13 crc kubenswrapper[4833]: I0219 13:07:13.062079 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 13:07:13 crc kubenswrapper[4833]: I0219 13:07:13.079055 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 13:07:14 crc kubenswrapper[4833]: I0219 13:07:14.720450 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"835f7f54-2151-44af-897e-1f21f96b6924","Type":"ContainerStarted","Data":"75a06a78f0ed1724c5ed0638c1775e390abd52c30addd63ea09f0e20c5043e49"} Feb 19 13:07:14 crc kubenswrapper[4833]: I0219 13:07:14.722968 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab2a9f14-787f-4834-a8e0-f4b55638492d","Type":"ContainerStarted","Data":"eb948ff3c7dcc71ef558a3c1afb6237b02ca8703270af3fbde4bc0715d8181c7"} Feb 19 13:07:14 crc kubenswrapper[4833]: I0219 13:07:14.724855 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5237bb0f-1ee7-451b-9457-d72c90993794","Type":"ContainerStarted","Data":"18f780012a1ff5f71753c17c35ca61e1791a3e0203bdcd77eae2c76e2c2f5be6"} Feb 19 13:07:14 crc kubenswrapper[4833]: I0219 13:07:14.724882 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="5237bb0f-1ee7-451b-9457-d72c90993794" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://18f780012a1ff5f71753c17c35ca61e1791a3e0203bdcd77eae2c76e2c2f5be6" gracePeriod=30 Feb 19 13:07:14 crc kubenswrapper[4833]: I0219 13:07:14.727597 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"04a482f4-f946-4c01-8f41-e1fbecbc2f53","Type":"ContainerStarted","Data":"83654509cf5a8b8576a81829893689a91f1b0ab33206d9ce48e66c3cd5e29d76"} Feb 19 13:07:14 crc kubenswrapper[4833]: I0219 13:07:14.727654 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"04a482f4-f946-4c01-8f41-e1fbecbc2f53","Type":"ContainerStarted","Data":"5c250c600a82fa894d17e179e45465106442605b6a769ce4eb2bf8134b8be7ae"} Feb 19 13:07:14 crc kubenswrapper[4833]: I0219 13:07:14.727801 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="04a482f4-f946-4c01-8f41-e1fbecbc2f53" containerName="nova-metadata-log" containerID="cri-o://5c250c600a82fa894d17e179e45465106442605b6a769ce4eb2bf8134b8be7ae" gracePeriod=30 Feb 19 13:07:14 crc kubenswrapper[4833]: I0219 13:07:14.727920 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="04a482f4-f946-4c01-8f41-e1fbecbc2f53" containerName="nova-metadata-metadata" containerID="cri-o://83654509cf5a8b8576a81829893689a91f1b0ab33206d9ce48e66c3cd5e29d76" gracePeriod=30 Feb 19 13:07:14 crc kubenswrapper[4833]: I0219 13:07:14.735129 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"32201a7e-0023-449d-957a-974934357899","Type":"ContainerStarted","Data":"563715ad6f6398546ac35d27869876b072c2473ace9725a1602890cd511f5898"} Feb 19 13:07:14 crc kubenswrapper[4833]: I0219 13:07:14.735163 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"32201a7e-0023-449d-957a-974934357899","Type":"ContainerStarted","Data":"8ad97adb9e8b776d062bbe5524d1979b382ebfade3c23dbbbe218fb079f16b17"} Feb 19 13:07:14 crc kubenswrapper[4833]: I0219 13:07:14.745523 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.98991132 podStartE2EDuration="5.74550791s" podCreationTimestamp="2026-02-19 13:07:09 +0000 UTC" firstStartedPulling="2026-02-19 13:07:11.199347241 +0000 UTC m=+1241.594866009" lastFinishedPulling="2026-02-19 13:07:13.954943831 +0000 UTC m=+1244.350462599" observedRunningTime="2026-02-19 13:07:14.739748891 +0000 UTC m=+1245.135267659" watchObservedRunningTime="2026-02-19 13:07:14.74550791 +0000 UTC m=+1245.141026678" Feb 19 13:07:14 crc kubenswrapper[4833]: I0219 13:07:14.761754 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.960827258 podStartE2EDuration="5.761736578s" podCreationTimestamp="2026-02-19 13:07:09 +0000 UTC" firstStartedPulling="2026-02-19 13:07:11.155233714 +0000 UTC m=+1241.550752482" lastFinishedPulling="2026-02-19 13:07:13.956143034 +0000 UTC m=+1244.351661802" observedRunningTime="2026-02-19 13:07:14.757310906 +0000 UTC m=+1245.152829674" watchObservedRunningTime="2026-02-19 13:07:14.761736578 +0000 UTC m=+1245.157255346" Feb 19 13:07:14 crc kubenswrapper[4833]: I0219 13:07:14.817634 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.750668847 podStartE2EDuration="5.81761235s" podCreationTimestamp="2026-02-19 13:07:09 +0000 UTC" firstStartedPulling="2026-02-19 13:07:10.890675462 +0000 UTC m=+1241.286194230" lastFinishedPulling="2026-02-19 13:07:13.957618955 +0000 UTC m=+1244.353137733" observedRunningTime="2026-02-19 13:07:14.791081508 +0000 UTC m=+1245.186600276" watchObservedRunningTime="2026-02-19 13:07:14.81761235 +0000 UTC m=+1245.213131118" Feb 19 13:07:14 crc kubenswrapper[4833]: I0219 13:07:14.851848 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.629606276 podStartE2EDuration="5.851828304s" podCreationTimestamp="2026-02-19 13:07:09 +0000 UTC" firstStartedPulling="2026-02-19 13:07:10.715603191 +0000 UTC m=+1241.111121959" lastFinishedPulling="2026-02-19 13:07:13.937825219 +0000 UTC m=+1244.333343987" observedRunningTime="2026-02-19 13:07:14.835559045 +0000 UTC m=+1245.231077823" watchObservedRunningTime="2026-02-19 13:07:14.851828304 +0000 UTC m=+1245.247347072" Feb 19 13:07:15 crc kubenswrapper[4833]: I0219 13:07:15.191934 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 13:07:15 crc kubenswrapper[4833]: I0219 13:07:15.192374 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 13:07:15 crc kubenswrapper[4833]: I0219 13:07:15.244102 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 13:07:15 crc kubenswrapper[4833]: I0219 13:07:15.393675 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:07:15 crc kubenswrapper[4833]: I0219 13:07:15.748722 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab2a9f14-787f-4834-a8e0-f4b55638492d","Type":"ContainerStarted","Data":"c2af436d770e2002ae73f88ea66fed8d69627077d9df14bc1d3e219e462beff7"} Feb 19 13:07:15 crc kubenswrapper[4833]: I0219 13:07:15.751735 4833 generic.go:334] "Generic (PLEG): container finished" podID="04a482f4-f946-4c01-8f41-e1fbecbc2f53" containerID="5c250c600a82fa894d17e179e45465106442605b6a769ce4eb2bf8134b8be7ae" exitCode=143 Feb 19 13:07:15 crc kubenswrapper[4833]: I0219 13:07:15.751799 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"04a482f4-f946-4c01-8f41-e1fbecbc2f53","Type":"ContainerDied","Data":"5c250c600a82fa894d17e179e45465106442605b6a769ce4eb2bf8134b8be7ae"} Feb 19 13:07:16 crc kubenswrapper[4833]: I0219 13:07:16.927518 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 19 13:07:17 crc kubenswrapper[4833]: I0219 13:07:17.774639 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab2a9f14-787f-4834-a8e0-f4b55638492d","Type":"ContainerStarted","Data":"74388ebf2fed1ae8fe55614e555d106c2e812c8c1e65c439bfd6a79cc3539c15"} Feb 19 13:07:17 crc kubenswrapper[4833]: I0219 13:07:17.775158 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 13:07:17 crc kubenswrapper[4833]: I0219 13:07:17.800392 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.256812236 podStartE2EDuration="7.800371119s" podCreationTimestamp="2026-02-19 13:07:10 +0000 UTC" firstStartedPulling="2026-02-19 13:07:11.543283274 +0000 UTC m=+1241.938802042" lastFinishedPulling="2026-02-19 13:07:17.086842157 +0000 UTC m=+1247.482360925" observedRunningTime="2026-02-19 13:07:17.793677145 +0000 UTC m=+1248.189195923" watchObservedRunningTime="2026-02-19 13:07:17.800371119 +0000 UTC m=+1248.195889887" Feb 19 13:07:18 crc kubenswrapper[4833]: I0219 13:07:18.796662 4833 generic.go:334] "Generic (PLEG): container finished" podID="3e20c111-d37e-4ab6-8096-92fa51a4c4f8" containerID="0c4e8e971796a06a16ae30f06e68b72ceaf71bc2d6404a5f5b1ddc3366307f00" exitCode=0 Feb 19 13:07:18 crc kubenswrapper[4833]: I0219 13:07:18.798947 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xf2cl" event={"ID":"3e20c111-d37e-4ab6-8096-92fa51a4c4f8","Type":"ContainerDied","Data":"0c4e8e971796a06a16ae30f06e68b72ceaf71bc2d6404a5f5b1ddc3366307f00"} Feb 19 13:07:19 crc kubenswrapper[4833]: I0219 13:07:19.812831 4833 generic.go:334] "Generic (PLEG): container finished" podID="d9bfbaad-5149-4e04-baac-a9d0c0c569d0" containerID="3be9148bd920a373f0eac2d2a3fd31c80ddfd2bf7deea6c915740a03c089743c" exitCode=0 Feb 19 13:07:19 crc kubenswrapper[4833]: I0219 13:07:19.812903 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xgh42" event={"ID":"d9bfbaad-5149-4e04-baac-a9d0c0c569d0","Type":"ContainerDied","Data":"3be9148bd920a373f0eac2d2a3fd31c80ddfd2bf7deea6c915740a03c089743c"} Feb 19 13:07:20 crc kubenswrapper[4833]: I0219 13:07:20.161818 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 13:07:20 crc kubenswrapper[4833]: I0219 13:07:20.162236 4833 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 13:07:20 crc kubenswrapper[4833]: I0219 13:07:20.194702 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xf2cl" Feb 19 13:07:20 crc kubenswrapper[4833]: I0219 13:07:20.244550 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 13:07:20 crc kubenswrapper[4833]: I0219 13:07:20.256095 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-tmzpd" Feb 19 13:07:20 crc kubenswrapper[4833]: I0219 13:07:20.293196 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 13:07:20 crc kubenswrapper[4833]: I0219 13:07:20.306882 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e20c111-d37e-4ab6-8096-92fa51a4c4f8-scripts\") pod \"3e20c111-d37e-4ab6-8096-92fa51a4c4f8\" (UID: \"3e20c111-d37e-4ab6-8096-92fa51a4c4f8\") " Feb 19 13:07:20 crc kubenswrapper[4833]: I0219 13:07:20.307011 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cr5hv\" (UniqueName: \"kubernetes.io/projected/3e20c111-d37e-4ab6-8096-92fa51a4c4f8-kube-api-access-cr5hv\") pod \"3e20c111-d37e-4ab6-8096-92fa51a4c4f8\" (UID: \"3e20c111-d37e-4ab6-8096-92fa51a4c4f8\") " Feb 19 13:07:20 crc kubenswrapper[4833]: I0219 13:07:20.307226 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e20c111-d37e-4ab6-8096-92fa51a4c4f8-config-data\") pod \"3e20c111-d37e-4ab6-8096-92fa51a4c4f8\" (UID: \"3e20c111-d37e-4ab6-8096-92fa51a4c4f8\") " Feb 19 13:07:20 crc kubenswrapper[4833]: I0219 13:07:20.307312 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e20c111-d37e-4ab6-8096-92fa51a4c4f8-combined-ca-bundle\") pod \"3e20c111-d37e-4ab6-8096-92fa51a4c4f8\" (UID: \"3e20c111-d37e-4ab6-8096-92fa51a4c4f8\") " Feb 19 13:07:20 crc kubenswrapper[4833]: I0219 13:07:20.317524 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e20c111-d37e-4ab6-8096-92fa51a4c4f8-scripts" (OuterVolumeSpecName: "scripts") pod "3e20c111-d37e-4ab6-8096-92fa51a4c4f8" (UID: "3e20c111-d37e-4ab6-8096-92fa51a4c4f8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:07:20 crc kubenswrapper[4833]: I0219 13:07:20.326801 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e20c111-d37e-4ab6-8096-92fa51a4c4f8-kube-api-access-cr5hv" (OuterVolumeSpecName: "kube-api-access-cr5hv") pod "3e20c111-d37e-4ab6-8096-92fa51a4c4f8" (UID: "3e20c111-d37e-4ab6-8096-92fa51a4c4f8"). InnerVolumeSpecName "kube-api-access-cr5hv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:07:20 crc kubenswrapper[4833]: I0219 13:07:20.350736 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e20c111-d37e-4ab6-8096-92fa51a4c4f8-config-data" (OuterVolumeSpecName: "config-data") pod "3e20c111-d37e-4ab6-8096-92fa51a4c4f8" (UID: "3e20c111-d37e-4ab6-8096-92fa51a4c4f8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:07:20 crc kubenswrapper[4833]: I0219 13:07:20.374597 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e20c111-d37e-4ab6-8096-92fa51a4c4f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e20c111-d37e-4ab6-8096-92fa51a4c4f8" (UID: "3e20c111-d37e-4ab6-8096-92fa51a4c4f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:07:20 crc kubenswrapper[4833]: I0219 13:07:20.410030 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e20c111-d37e-4ab6-8096-92fa51a4c4f8-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:20 crc kubenswrapper[4833]: I0219 13:07:20.410063 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e20c111-d37e-4ab6-8096-92fa51a4c4f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:20 crc kubenswrapper[4833]: I0219 13:07:20.410073 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e20c111-d37e-4ab6-8096-92fa51a4c4f8-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:20 crc kubenswrapper[4833]: I0219 13:07:20.410082 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cr5hv\" (UniqueName: \"kubernetes.io/projected/3e20c111-d37e-4ab6-8096-92fa51a4c4f8-kube-api-access-cr5hv\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:20 crc kubenswrapper[4833]: I0219 13:07:20.416449 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-scljw"] Feb 19 13:07:20 crc kubenswrapper[4833]: I0219 13:07:20.416684 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-scljw" podUID="1dcaf371-d64d-457f-859b-7815899d3450" containerName="dnsmasq-dns" containerID="cri-o://110c9ce2e716acbaa3c882a3a8195dd2a30b6cd06773f6928b346706c8abb09a" gracePeriod=10 Feb 19 13:07:20 crc kubenswrapper[4833]: I0219 13:07:20.823029 4833 generic.go:334] "Generic (PLEG): container finished" podID="1dcaf371-d64d-457f-859b-7815899d3450" containerID="110c9ce2e716acbaa3c882a3a8195dd2a30b6cd06773f6928b346706c8abb09a" exitCode=0 Feb 19 13:07:20 crc kubenswrapper[4833]: I0219 13:07:20.823195 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-scljw" event={"ID":"1dcaf371-d64d-457f-859b-7815899d3450","Type":"ContainerDied","Data":"110c9ce2e716acbaa3c882a3a8195dd2a30b6cd06773f6928b346706c8abb09a"} Feb 19 13:07:20 crc kubenswrapper[4833]: I0219 13:07:20.825190 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xf2cl" event={"ID":"3e20c111-d37e-4ab6-8096-92fa51a4c4f8","Type":"ContainerDied","Data":"39bbc9a8650bca5a1a70eed6f71cba5ef4445045d13c3301f7db8b9916f11299"} Feb 19 13:07:20 crc kubenswrapper[4833]: I0219 13:07:20.825224 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39bbc9a8650bca5a1a70eed6f71cba5ef4445045d13c3301f7db8b9916f11299" Feb 19 13:07:20 crc kubenswrapper[4833]: I0219 13:07:20.825800 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xf2cl" Feb 19 13:07:20 crc kubenswrapper[4833]: I0219 13:07:20.916814 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.029900 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-scljw" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.093117 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.093324 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="32201a7e-0023-449d-957a-974934357899" containerName="nova-api-log" containerID="cri-o://8ad97adb9e8b776d062bbe5524d1979b382ebfade3c23dbbbe218fb079f16b17" gracePeriod=30 Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.093454 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="32201a7e-0023-449d-957a-974934357899" containerName="nova-api-api" containerID="cri-o://563715ad6f6398546ac35d27869876b072c2473ace9725a1602890cd511f5898" gracePeriod=30 Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.108687 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="32201a7e-0023-449d-957a-974934357899" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": EOF" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.108966 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="32201a7e-0023-449d-957a-974934357899" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": EOF" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.122332 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dcaf371-d64d-457f-859b-7815899d3450-ovsdbserver-sb\") pod \"1dcaf371-d64d-457f-859b-7815899d3450\" (UID: \"1dcaf371-d64d-457f-859b-7815899d3450\") " Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.122429 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dcaf371-d64d-457f-859b-7815899d3450-dns-svc\") pod \"1dcaf371-d64d-457f-859b-7815899d3450\" (UID: \"1dcaf371-d64d-457f-859b-7815899d3450\") " Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.122510 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g89ds\" (UniqueName: \"kubernetes.io/projected/1dcaf371-d64d-457f-859b-7815899d3450-kube-api-access-g89ds\") pod \"1dcaf371-d64d-457f-859b-7815899d3450\" (UID: \"1dcaf371-d64d-457f-859b-7815899d3450\") " Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.122533 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1dcaf371-d64d-457f-859b-7815899d3450-dns-swift-storage-0\") pod \"1dcaf371-d64d-457f-859b-7815899d3450\" (UID: \"1dcaf371-d64d-457f-859b-7815899d3450\") " Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.122638 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dcaf371-d64d-457f-859b-7815899d3450-ovsdbserver-nb\") pod 
\"1dcaf371-d64d-457f-859b-7815899d3450\" (UID: \"1dcaf371-d64d-457f-859b-7815899d3450\") " Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.122664 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dcaf371-d64d-457f-859b-7815899d3450-config\") pod \"1dcaf371-d64d-457f-859b-7815899d3450\" (UID: \"1dcaf371-d64d-457f-859b-7815899d3450\") " Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.128141 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dcaf371-d64d-457f-859b-7815899d3450-kube-api-access-g89ds" (OuterVolumeSpecName: "kube-api-access-g89ds") pod "1dcaf371-d64d-457f-859b-7815899d3450" (UID: "1dcaf371-d64d-457f-859b-7815899d3450"). InnerVolumeSpecName "kube-api-access-g89ds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.211186 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dcaf371-d64d-457f-859b-7815899d3450-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1dcaf371-d64d-457f-859b-7815899d3450" (UID: "1dcaf371-d64d-457f-859b-7815899d3450"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.227034 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dcaf371-d64d-457f-859b-7815899d3450-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1dcaf371-d64d-457f-859b-7815899d3450" (UID: "1dcaf371-d64d-457f-859b-7815899d3450"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.228017 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dcaf371-d64d-457f-859b-7815899d3450-ovsdbserver-nb\") pod \"1dcaf371-d64d-457f-859b-7815899d3450\" (UID: \"1dcaf371-d64d-457f-859b-7815899d3450\") " Feb 19 13:07:21 crc kubenswrapper[4833]: W0219 13:07:21.228145 4833 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/1dcaf371-d64d-457f-859b-7815899d3450/volumes/kubernetes.io~configmap/ovsdbserver-nb Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.228173 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dcaf371-d64d-457f-859b-7815899d3450-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1dcaf371-d64d-457f-859b-7815899d3450" (UID: "1dcaf371-d64d-457f-859b-7815899d3450"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.228666 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g89ds\" (UniqueName: \"kubernetes.io/projected/1dcaf371-d64d-457f-859b-7815899d3450-kube-api-access-g89ds\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.228702 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1dcaf371-d64d-457f-859b-7815899d3450-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.228740 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dcaf371-d64d-457f-859b-7815899d3450-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.262597 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dcaf371-d64d-457f-859b-7815899d3450-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1dcaf371-d64d-457f-859b-7815899d3450" (UID: "1dcaf371-d64d-457f-859b-7815899d3450"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.307229 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dcaf371-d64d-457f-859b-7815899d3450-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1dcaf371-d64d-457f-859b-7815899d3450" (UID: "1dcaf371-d64d-457f-859b-7815899d3450"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.330531 4833 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1dcaf371-d64d-457f-859b-7815899d3450-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.330564 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1dcaf371-d64d-457f-859b-7815899d3450-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.338274 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dcaf371-d64d-457f-859b-7815899d3450-config" (OuterVolumeSpecName: "config") pod "1dcaf371-d64d-457f-859b-7815899d3450" (UID: "1dcaf371-d64d-457f-859b-7815899d3450"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.349125 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xgh42" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.432223 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9bfbaad-5149-4e04-baac-a9d0c0c569d0-combined-ca-bundle\") pod \"d9bfbaad-5149-4e04-baac-a9d0c0c569d0\" (UID: \"d9bfbaad-5149-4e04-baac-a9d0c0c569d0\") " Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.432319 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9bfbaad-5149-4e04-baac-a9d0c0c569d0-config-data\") pod \"d9bfbaad-5149-4e04-baac-a9d0c0c569d0\" (UID: \"d9bfbaad-5149-4e04-baac-a9d0c0c569d0\") " Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.432428 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9bfbaad-5149-4e04-baac-a9d0c0c569d0-scripts\") pod \"d9bfbaad-5149-4e04-baac-a9d0c0c569d0\" (UID: \"d9bfbaad-5149-4e04-baac-a9d0c0c569d0\") " Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.432615 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qf8l\" (UniqueName: \"kubernetes.io/projected/d9bfbaad-5149-4e04-baac-a9d0c0c569d0-kube-api-access-2qf8l\") pod \"d9bfbaad-5149-4e04-baac-a9d0c0c569d0\" (UID: \"d9bfbaad-5149-4e04-baac-a9d0c0c569d0\") " Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.433122 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dcaf371-d64d-457f-859b-7815899d3450-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.448588 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9bfbaad-5149-4e04-baac-a9d0c0c569d0-kube-api-access-2qf8l" (OuterVolumeSpecName: "kube-api-access-2qf8l") pod "d9bfbaad-5149-4e04-baac-a9d0c0c569d0" (UID: "d9bfbaad-5149-4e04-baac-a9d0c0c569d0"). InnerVolumeSpecName "kube-api-access-2qf8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.454696 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9bfbaad-5149-4e04-baac-a9d0c0c569d0-scripts" (OuterVolumeSpecName: "scripts") pod "d9bfbaad-5149-4e04-baac-a9d0c0c569d0" (UID: "d9bfbaad-5149-4e04-baac-a9d0c0c569d0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.461656 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9bfbaad-5149-4e04-baac-a9d0c0c569d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d9bfbaad-5149-4e04-baac-a9d0c0c569d0" (UID: "d9bfbaad-5149-4e04-baac-a9d0c0c569d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.478975 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9bfbaad-5149-4e04-baac-a9d0c0c569d0-config-data" (OuterVolumeSpecName: "config-data") pod "d9bfbaad-5149-4e04-baac-a9d0c0c569d0" (UID: "d9bfbaad-5149-4e04-baac-a9d0c0c569d0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.534697 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qf8l\" (UniqueName: \"kubernetes.io/projected/d9bfbaad-5149-4e04-baac-a9d0c0c569d0-kube-api-access-2qf8l\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.534732 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9bfbaad-5149-4e04-baac-a9d0c0c569d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.534741 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9bfbaad-5149-4e04-baac-a9d0c0c569d0-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.534749 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d9bfbaad-5149-4e04-baac-a9d0c0c569d0-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.549768 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.836925 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-xgh42" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.836926 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-xgh42" event={"ID":"d9bfbaad-5149-4e04-baac-a9d0c0c569d0","Type":"ContainerDied","Data":"33d893df1e7f0759a7aa39829edd25c921ac364a5b9a6f86e0ee9b4e0f0b03e6"} Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.837003 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33d893df1e7f0759a7aa39829edd25c921ac364a5b9a6f86e0ee9b4e0f0b03e6" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.839387 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-scljw" event={"ID":"1dcaf371-d64d-457f-859b-7815899d3450","Type":"ContainerDied","Data":"7cac59d2f9f4b70c3e7595fd5250a17042a91b2acc41ca06b97dfa3f76368b0e"} Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.839439 4833 scope.go:117] "RemoveContainer" containerID="110c9ce2e716acbaa3c882a3a8195dd2a30b6cd06773f6928b346706c8abb09a" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.839621 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-scljw" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.846055 4833 generic.go:334] "Generic (PLEG): container finished" podID="32201a7e-0023-449d-957a-974934357899" containerID="8ad97adb9e8b776d062bbe5524d1979b382ebfade3c23dbbbe218fb079f16b17" exitCode=143 Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.846753 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"32201a7e-0023-449d-957a-974934357899","Type":"ContainerDied","Data":"8ad97adb9e8b776d062bbe5524d1979b382ebfade3c23dbbbe218fb079f16b17"} Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.865837 4833 scope.go:117] "RemoveContainer" containerID="2db8f9265dee8cd82a63837bc7e389bdad76a4fb25b9d7a0515902af0499ef33" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.896603 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-scljw"] Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.905785 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-scljw"] Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.937338 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 13:07:21 crc kubenswrapper[4833]: E0219 13:07:21.937824 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9bfbaad-5149-4e04-baac-a9d0c0c569d0" containerName="nova-cell1-conductor-db-sync" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.937847 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9bfbaad-5149-4e04-baac-a9d0c0c569d0" containerName="nova-cell1-conductor-db-sync" Feb 19 13:07:21 crc kubenswrapper[4833]: E0219 13:07:21.937872 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e20c111-d37e-4ab6-8096-92fa51a4c4f8" containerName="nova-manage" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.937880 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e20c111-d37e-4ab6-8096-92fa51a4c4f8" containerName="nova-manage" Feb 19 13:07:21 crc kubenswrapper[4833]: E0219 13:07:21.937899 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dcaf371-d64d-457f-859b-7815899d3450" containerName="dnsmasq-dns" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.937907 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dcaf371-d64d-457f-859b-7815899d3450" containerName="dnsmasq-dns" Feb 19 13:07:21 crc kubenswrapper[4833]: E0219 13:07:21.937926 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dcaf371-d64d-457f-859b-7815899d3450" containerName="init" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.937934 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dcaf371-d64d-457f-859b-7815899d3450" containerName="init" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.938149 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e20c111-d37e-4ab6-8096-92fa51a4c4f8" containerName="nova-manage" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.938171 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dcaf371-d64d-457f-859b-7815899d3450" containerName="dnsmasq-dns" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.938192 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9bfbaad-5149-4e04-baac-a9d0c0c569d0" containerName="nova-cell1-conductor-db-sync" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.938910 4833 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.941472 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 19 13:07:21 crc kubenswrapper[4833]: I0219 13:07:21.948144 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 13:07:22 crc kubenswrapper[4833]: I0219 13:07:22.044049 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cff86803-41bf-463e-a5ea-30f70425a39a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"cff86803-41bf-463e-a5ea-30f70425a39a\") " pod="openstack/nova-cell1-conductor-0" Feb 19 13:07:22 crc kubenswrapper[4833]: I0219 13:07:22.044188 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cff86803-41bf-463e-a5ea-30f70425a39a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"cff86803-41bf-463e-a5ea-30f70425a39a\") " pod="openstack/nova-cell1-conductor-0" Feb 19 13:07:22 crc kubenswrapper[4833]: I0219 13:07:22.044417 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkdpv\" (UniqueName: \"kubernetes.io/projected/cff86803-41bf-463e-a5ea-30f70425a39a-kube-api-access-gkdpv\") pod \"nova-cell1-conductor-0\" (UID: \"cff86803-41bf-463e-a5ea-30f70425a39a\") " pod="openstack/nova-cell1-conductor-0" Feb 19 13:07:22 crc kubenswrapper[4833]: I0219 13:07:22.145899 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkdpv\" (UniqueName: \"kubernetes.io/projected/cff86803-41bf-463e-a5ea-30f70425a39a-kube-api-access-gkdpv\") pod \"nova-cell1-conductor-0\" (UID: \"cff86803-41bf-463e-a5ea-30f70425a39a\") " pod="openstack/nova-cell1-conductor-0" Feb 19 13:07:22 crc kubenswrapper[4833]: I0219 13:07:22.146022 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cff86803-41bf-463e-a5ea-30f70425a39a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"cff86803-41bf-463e-a5ea-30f70425a39a\") " pod="openstack/nova-cell1-conductor-0" Feb 19 13:07:22 crc kubenswrapper[4833]: I0219 13:07:22.146097 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cff86803-41bf-463e-a5ea-30f70425a39a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"cff86803-41bf-463e-a5ea-30f70425a39a\") " pod="openstack/nova-cell1-conductor-0" Feb 19 13:07:22 crc kubenswrapper[4833]: I0219 13:07:22.153355 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cff86803-41bf-463e-a5ea-30f70425a39a-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"cff86803-41bf-463e-a5ea-30f70425a39a\") " pod="openstack/nova-cell1-conductor-0" Feb 19 13:07:22 crc kubenswrapper[4833]: I0219 13:07:22.154731 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cff86803-41bf-463e-a5ea-30f70425a39a-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"cff86803-41bf-463e-a5ea-30f70425a39a\") " pod="openstack/nova-cell1-conductor-0" Feb 19 13:07:22 crc kubenswrapper[4833]: I0219 
13:07:22.166165 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkdpv\" (UniqueName: \"kubernetes.io/projected/cff86803-41bf-463e-a5ea-30f70425a39a-kube-api-access-gkdpv\") pod \"nova-cell1-conductor-0\" (UID: \"cff86803-41bf-463e-a5ea-30f70425a39a\") " pod="openstack/nova-cell1-conductor-0" Feb 19 13:07:22 crc kubenswrapper[4833]: I0219 13:07:22.259377 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 19 13:07:22 crc kubenswrapper[4833]: I0219 13:07:22.326456 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dcaf371-d64d-457f-859b-7815899d3450" path="/var/lib/kubelet/pods/1dcaf371-d64d-457f-859b-7815899d3450/volumes" Feb 19 13:07:22 crc kubenswrapper[4833]: I0219 13:07:22.727997 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 19 13:07:22 crc kubenswrapper[4833]: I0219 13:07:22.868577 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="835f7f54-2151-44af-897e-1f21f96b6924" containerName="nova-scheduler-scheduler" containerID="cri-o://75a06a78f0ed1724c5ed0638c1775e390abd52c30addd63ea09f0e20c5043e49" gracePeriod=30 Feb 19 13:07:22 crc kubenswrapper[4833]: I0219 13:07:22.868824 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"cff86803-41bf-463e-a5ea-30f70425a39a","Type":"ContainerStarted","Data":"3242b1aed8f5c9039009773370efd573a63984e78c63d43ad4d52d27a9f47145"} Feb 19 13:07:23 crc kubenswrapper[4833]: I0219 13:07:23.910765 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"cff86803-41bf-463e-a5ea-30f70425a39a","Type":"ContainerStarted","Data":"42a37b8c35bc997c41e1088498e7a29ded8bc59144d518c6a4d359822057f4fe"} Feb 19 13:07:23 crc kubenswrapper[4833]: I0219 13:07:23.912591 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 19 13:07:25 crc kubenswrapper[4833]: E0219 13:07:25.251829 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75a06a78f0ed1724c5ed0638c1775e390abd52c30addd63ea09f0e20c5043e49" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 13:07:25 crc kubenswrapper[4833]: E0219 13:07:25.255376 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75a06a78f0ed1724c5ed0638c1775e390abd52c30addd63ea09f0e20c5043e49" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 13:07:25 crc kubenswrapper[4833]: E0219 13:07:25.257456 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75a06a78f0ed1724c5ed0638c1775e390abd52c30addd63ea09f0e20c5043e49" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 13:07:25 crc kubenswrapper[4833]: E0219 13:07:25.257558 4833 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" 
podUID="835f7f54-2151-44af-897e-1f21f96b6924" containerName="nova-scheduler-scheduler" Feb 19 13:07:25 crc kubenswrapper[4833]: I0219 13:07:25.935847 4833 generic.go:334] "Generic (PLEG): container finished" podID="835f7f54-2151-44af-897e-1f21f96b6924" containerID="75a06a78f0ed1724c5ed0638c1775e390abd52c30addd63ea09f0e20c5043e49" exitCode=0 Feb 19 13:07:25 crc kubenswrapper[4833]: I0219 13:07:25.935971 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"835f7f54-2151-44af-897e-1f21f96b6924","Type":"ContainerDied","Data":"75a06a78f0ed1724c5ed0638c1775e390abd52c30addd63ea09f0e20c5043e49"} Feb 19 13:07:26 crc kubenswrapper[4833]: I0219 13:07:26.126879 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 13:07:26 crc kubenswrapper[4833]: I0219 13:07:26.169223 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=5.169186406 podStartE2EDuration="5.169186406s" podCreationTimestamp="2026-02-19 13:07:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:07:23.991923527 +0000 UTC m=+1254.387442305" watchObservedRunningTime="2026-02-19 13:07:26.169186406 +0000 UTC m=+1256.564705214" Feb 19 13:07:26 crc kubenswrapper[4833]: I0219 13:07:26.241137 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/835f7f54-2151-44af-897e-1f21f96b6924-combined-ca-bundle\") pod \"835f7f54-2151-44af-897e-1f21f96b6924\" (UID: \"835f7f54-2151-44af-897e-1f21f96b6924\") " Feb 19 13:07:26 crc kubenswrapper[4833]: I0219 13:07:26.241202 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8kgl\" (UniqueName: \"kubernetes.io/projected/835f7f54-2151-44af-897e-1f21f96b6924-kube-api-access-g8kgl\") pod \"835f7f54-2151-44af-897e-1f21f96b6924\" (UID: \"835f7f54-2151-44af-897e-1f21f96b6924\") " Feb 19 13:07:26 crc kubenswrapper[4833]: I0219 13:07:26.241226 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/835f7f54-2151-44af-897e-1f21f96b6924-config-data\") pod \"835f7f54-2151-44af-897e-1f21f96b6924\" (UID: \"835f7f54-2151-44af-897e-1f21f96b6924\") " Feb 19 13:07:26 crc kubenswrapper[4833]: I0219 13:07:26.247810 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/835f7f54-2151-44af-897e-1f21f96b6924-kube-api-access-g8kgl" (OuterVolumeSpecName: "kube-api-access-g8kgl") pod "835f7f54-2151-44af-897e-1f21f96b6924" (UID: "835f7f54-2151-44af-897e-1f21f96b6924"). InnerVolumeSpecName "kube-api-access-g8kgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:07:26 crc kubenswrapper[4833]: I0219 13:07:26.283620 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/835f7f54-2151-44af-897e-1f21f96b6924-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "835f7f54-2151-44af-897e-1f21f96b6924" (UID: "835f7f54-2151-44af-897e-1f21f96b6924"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:07:26 crc kubenswrapper[4833]: I0219 13:07:26.292654 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/835f7f54-2151-44af-897e-1f21f96b6924-config-data" (OuterVolumeSpecName: "config-data") pod "835f7f54-2151-44af-897e-1f21f96b6924" (UID: "835f7f54-2151-44af-897e-1f21f96b6924"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:07:26 crc kubenswrapper[4833]: I0219 13:07:26.344193 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/835f7f54-2151-44af-897e-1f21f96b6924-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:26 crc kubenswrapper[4833]: I0219 13:07:26.344233 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8kgl\" (UniqueName: \"kubernetes.io/projected/835f7f54-2151-44af-897e-1f21f96b6924-kube-api-access-g8kgl\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:26 crc kubenswrapper[4833]: I0219 13:07:26.344247 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/835f7f54-2151-44af-897e-1f21f96b6924-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:26 crc kubenswrapper[4833]: I0219 13:07:26.950200 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"835f7f54-2151-44af-897e-1f21f96b6924","Type":"ContainerDied","Data":"4056a53c2d84cb7e0b3065838bf24b8ffb26353aa6a977ec45403651e9b2e6b1"} Feb 19 13:07:26 crc kubenswrapper[4833]: I0219 13:07:26.950247 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 13:07:26 crc kubenswrapper[4833]: I0219 13:07:26.950706 4833 scope.go:117] "RemoveContainer" containerID="75a06a78f0ed1724c5ed0638c1775e390abd52c30addd63ea09f0e20c5043e49" Feb 19 13:07:26 crc kubenswrapper[4833]: I0219 13:07:26.952980 4833 generic.go:334] "Generic (PLEG): container finished" podID="32201a7e-0023-449d-957a-974934357899" containerID="563715ad6f6398546ac35d27869876b072c2473ace9725a1602890cd511f5898" exitCode=0 Feb 19 13:07:26 crc kubenswrapper[4833]: I0219 13:07:26.953043 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"32201a7e-0023-449d-957a-974934357899","Type":"ContainerDied","Data":"563715ad6f6398546ac35d27869876b072c2473ace9725a1602890cd511f5898"} Feb 19 13:07:26 crc kubenswrapper[4833]: I0219 13:07:26.953080 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"32201a7e-0023-449d-957a-974934357899","Type":"ContainerDied","Data":"b521ed24a7d5252fd031fc06e53d40cc78c3cc5e057ecd931bf3bad633755a5b"} Feb 19 13:07:26 crc kubenswrapper[4833]: I0219 13:07:26.953102 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b521ed24a7d5252fd031fc06e53d40cc78c3cc5e057ecd931bf3bad633755a5b" Feb 19 13:07:26 crc kubenswrapper[4833]: I0219 13:07:26.978822 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 13:07:26 crc kubenswrapper[4833]: I0219 13:07:26.996960 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.010569 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.034934 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 13:07:27 crc kubenswrapper[4833]: E0219 13:07:27.036212 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32201a7e-0023-449d-957a-974934357899" containerName="nova-api-api" Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.036725 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="32201a7e-0023-449d-957a-974934357899" containerName="nova-api-api" Feb 19 13:07:27 crc kubenswrapper[4833]: E0219 13:07:27.036921 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32201a7e-0023-449d-957a-974934357899" containerName="nova-api-log" Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.037042 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="32201a7e-0023-449d-957a-974934357899" containerName="nova-api-log" Feb 19 13:07:27 crc kubenswrapper[4833]: E0219 13:07:27.037166 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="835f7f54-2151-44af-897e-1f21f96b6924" containerName="nova-scheduler-scheduler" Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.037275 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="835f7f54-2151-44af-897e-1f21f96b6924" containerName="nova-scheduler-scheduler" Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.037791 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="32201a7e-0023-449d-957a-974934357899" containerName="nova-api-api" Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.038046 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="32201a7e-0023-449d-957a-974934357899" containerName="nova-api-log" Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.038191 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="835f7f54-2151-44af-897e-1f21f96b6924" containerName="nova-scheduler-scheduler" Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.039884 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.041981 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.045161 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.064249 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32201a7e-0023-449d-957a-974934357899-logs\") pod \"32201a7e-0023-449d-957a-974934357899\" (UID: \"32201a7e-0023-449d-957a-974934357899\") " Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.064406 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cjwv\" (UniqueName: \"kubernetes.io/projected/32201a7e-0023-449d-957a-974934357899-kube-api-access-4cjwv\") pod \"32201a7e-0023-449d-957a-974934357899\" (UID: \"32201a7e-0023-449d-957a-974934357899\") " Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.064475 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32201a7e-0023-449d-957a-974934357899-config-data\") pod \"32201a7e-0023-449d-957a-974934357899\" (UID: \"32201a7e-0023-449d-957a-974934357899\") " Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.064510 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32201a7e-0023-449d-957a-974934357899-combined-ca-bundle\") pod \"32201a7e-0023-449d-957a-974934357899\" (UID: \"32201a7e-0023-449d-957a-974934357899\") " Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.064811 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32201a7e-0023-449d-957a-974934357899-logs" (OuterVolumeSpecName: "logs") pod "32201a7e-0023-449d-957a-974934357899" (UID: "32201a7e-0023-449d-957a-974934357899"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.069240 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32201a7e-0023-449d-957a-974934357899-kube-api-access-4cjwv" (OuterVolumeSpecName: "kube-api-access-4cjwv") pod "32201a7e-0023-449d-957a-974934357899" (UID: "32201a7e-0023-449d-957a-974934357899"). InnerVolumeSpecName "kube-api-access-4cjwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.097794 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32201a7e-0023-449d-957a-974934357899-config-data" (OuterVolumeSpecName: "config-data") pod "32201a7e-0023-449d-957a-974934357899" (UID: "32201a7e-0023-449d-957a-974934357899"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.099465 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32201a7e-0023-449d-957a-974934357899-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32201a7e-0023-449d-957a-974934357899" (UID: "32201a7e-0023-449d-957a-974934357899"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.166365 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd0bc286-f8c6-436b-926a-0aedf7504098-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dd0bc286-f8c6-436b-926a-0aedf7504098\") " pod="openstack/nova-scheduler-0" Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.166460 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd0bc286-f8c6-436b-926a-0aedf7504098-config-data\") pod \"nova-scheduler-0\" (UID: \"dd0bc286-f8c6-436b-926a-0aedf7504098\") " pod="openstack/nova-scheduler-0" Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.166508 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czsfr\" (UniqueName: \"kubernetes.io/projected/dd0bc286-f8c6-436b-926a-0aedf7504098-kube-api-access-czsfr\") pod \"nova-scheduler-0\" (UID: \"dd0bc286-f8c6-436b-926a-0aedf7504098\") " pod="openstack/nova-scheduler-0" Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.166695 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cjwv\" (UniqueName: \"kubernetes.io/projected/32201a7e-0023-449d-957a-974934357899-kube-api-access-4cjwv\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.166715 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32201a7e-0023-449d-957a-974934357899-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.166727 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32201a7e-0023-449d-957a-974934357899-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.166741 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32201a7e-0023-449d-957a-974934357899-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.268181 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czsfr\" (UniqueName: \"kubernetes.io/projected/dd0bc286-f8c6-436b-926a-0aedf7504098-kube-api-access-czsfr\") pod \"nova-scheduler-0\" (UID: \"dd0bc286-f8c6-436b-926a-0aedf7504098\") " pod="openstack/nova-scheduler-0" Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.268552 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd0bc286-f8c6-436b-926a-0aedf7504098-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dd0bc286-f8c6-436b-926a-0aedf7504098\") " pod="openstack/nova-scheduler-0" Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.268695 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd0bc286-f8c6-436b-926a-0aedf7504098-config-data\") pod \"nova-scheduler-0\" (UID: \"dd0bc286-f8c6-436b-926a-0aedf7504098\") " pod="openstack/nova-scheduler-0" Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.276148 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/dd0bc286-f8c6-436b-926a-0aedf7504098-config-data\") pod \"nova-scheduler-0\" (UID: \"dd0bc286-f8c6-436b-926a-0aedf7504098\") " pod="openstack/nova-scheduler-0" Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.276805 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd0bc286-f8c6-436b-926a-0aedf7504098-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dd0bc286-f8c6-436b-926a-0aedf7504098\") " pod="openstack/nova-scheduler-0" Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.298250 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czsfr\" (UniqueName: \"kubernetes.io/projected/dd0bc286-f8c6-436b-926a-0aedf7504098-kube-api-access-czsfr\") pod \"nova-scheduler-0\" (UID: \"dd0bc286-f8c6-436b-926a-0aedf7504098\") " pod="openstack/nova-scheduler-0" Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.307366 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.364529 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.850934 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 13:07:27 crc kubenswrapper[4833]: W0219 13:07:27.863068 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd0bc286_f8c6_436b_926a_0aedf7504098.slice/crio-eff841d256aceefcc7e29ed89d643db1b975438137ea04b6124bbbda742e5ccb WatchSource:0}: Error finding container eff841d256aceefcc7e29ed89d643db1b975438137ea04b6124bbbda742e5ccb: Status 404 returned error can't find the container with id eff841d256aceefcc7e29ed89d643db1b975438137ea04b6124bbbda742e5ccb Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.966143 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dd0bc286-f8c6-436b-926a-0aedf7504098","Type":"ContainerStarted","Data":"eff841d256aceefcc7e29ed89d643db1b975438137ea04b6124bbbda742e5ccb"} Feb 19 13:07:27 crc kubenswrapper[4833]: I0219 13:07:27.966207 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 13:07:28 crc kubenswrapper[4833]: I0219 13:07:28.048728 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 13:07:28 crc kubenswrapper[4833]: I0219 13:07:28.066120 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 13:07:28 crc kubenswrapper[4833]: I0219 13:07:28.075354 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 13:07:28 crc kubenswrapper[4833]: I0219 13:07:28.078590 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 13:07:28 crc kubenswrapper[4833]: I0219 13:07:28.080410 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 13:07:28 crc kubenswrapper[4833]: I0219 13:07:28.086183 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 13:07:28 crc kubenswrapper[4833]: I0219 13:07:28.187810 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f54b1557-980c-46ea-a7a7-994e77448ba7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f54b1557-980c-46ea-a7a7-994e77448ba7\") " pod="openstack/nova-api-0" Feb 19 13:07:28 crc kubenswrapper[4833]: I0219 13:07:28.187918 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f54b1557-980c-46ea-a7a7-994e77448ba7-config-data\") pod \"nova-api-0\" (UID: \"f54b1557-980c-46ea-a7a7-994e77448ba7\") " pod="openstack/nova-api-0" Feb 19 13:07:28 crc kubenswrapper[4833]: I0219 13:07:28.187936 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f54b1557-980c-46ea-a7a7-994e77448ba7-logs\") pod \"nova-api-0\" (UID: \"f54b1557-980c-46ea-a7a7-994e77448ba7\") " pod="openstack/nova-api-0" Feb 19 13:07:28 crc kubenswrapper[4833]: I0219 13:07:28.187957 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl68d\" (UniqueName: \"kubernetes.io/projected/f54b1557-980c-46ea-a7a7-994e77448ba7-kube-api-access-rl68d\") pod \"nova-api-0\" (UID: \"f54b1557-980c-46ea-a7a7-994e77448ba7\") " pod="openstack/nova-api-0" Feb 19 13:07:28 crc kubenswrapper[4833]: I0219 13:07:28.290098 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f54b1557-980c-46ea-a7a7-994e77448ba7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f54b1557-980c-46ea-a7a7-994e77448ba7\") " pod="openstack/nova-api-0" Feb 19 13:07:28 crc kubenswrapper[4833]: I0219 13:07:28.290167 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f54b1557-980c-46ea-a7a7-994e77448ba7-config-data\") pod \"nova-api-0\" (UID: \"f54b1557-980c-46ea-a7a7-994e77448ba7\") " pod="openstack/nova-api-0" Feb 19 13:07:28 crc kubenswrapper[4833]: I0219 13:07:28.290195 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f54b1557-980c-46ea-a7a7-994e77448ba7-logs\") pod \"nova-api-0\" (UID: \"f54b1557-980c-46ea-a7a7-994e77448ba7\") " pod="openstack/nova-api-0" Feb 19 13:07:28 crc kubenswrapper[4833]: I0219 13:07:28.290221 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl68d\" (UniqueName: \"kubernetes.io/projected/f54b1557-980c-46ea-a7a7-994e77448ba7-kube-api-access-rl68d\") pod \"nova-api-0\" (UID: \"f54b1557-980c-46ea-a7a7-994e77448ba7\") " pod="openstack/nova-api-0" Feb 19 13:07:28 crc kubenswrapper[4833]: I0219 13:07:28.290856 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f54b1557-980c-46ea-a7a7-994e77448ba7-logs\") pod \"nova-api-0\" (UID: \"f54b1557-980c-46ea-a7a7-994e77448ba7\") " 
pod="openstack/nova-api-0" Feb 19 13:07:28 crc kubenswrapper[4833]: I0219 13:07:28.294980 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f54b1557-980c-46ea-a7a7-994e77448ba7-config-data\") pod \"nova-api-0\" (UID: \"f54b1557-980c-46ea-a7a7-994e77448ba7\") " pod="openstack/nova-api-0" Feb 19 13:07:28 crc kubenswrapper[4833]: I0219 13:07:28.295137 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f54b1557-980c-46ea-a7a7-994e77448ba7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f54b1557-980c-46ea-a7a7-994e77448ba7\") " pod="openstack/nova-api-0" Feb 19 13:07:28 crc kubenswrapper[4833]: I0219 13:07:28.305198 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl68d\" (UniqueName: \"kubernetes.io/projected/f54b1557-980c-46ea-a7a7-994e77448ba7-kube-api-access-rl68d\") pod \"nova-api-0\" (UID: \"f54b1557-980c-46ea-a7a7-994e77448ba7\") " pod="openstack/nova-api-0" Feb 19 13:07:28 crc kubenswrapper[4833]: I0219 13:07:28.330245 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32201a7e-0023-449d-957a-974934357899" path="/var/lib/kubelet/pods/32201a7e-0023-449d-957a-974934357899/volumes" Feb 19 13:07:28 crc kubenswrapper[4833]: I0219 13:07:28.331138 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="835f7f54-2151-44af-897e-1f21f96b6924" path="/var/lib/kubelet/pods/835f7f54-2151-44af-897e-1f21f96b6924/volumes" Feb 19 13:07:28 crc kubenswrapper[4833]: I0219 13:07:28.398601 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 13:07:28 crc kubenswrapper[4833]: I0219 13:07:28.885300 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 13:07:28 crc kubenswrapper[4833]: I0219 13:07:28.979151 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f54b1557-980c-46ea-a7a7-994e77448ba7","Type":"ContainerStarted","Data":"56a2ff208c79e9f3f1b76d3a3349c6d380c2d448ed1b8ed8ab48fdaf0e5645a1"} Feb 19 13:07:28 crc kubenswrapper[4833]: I0219 13:07:28.981253 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dd0bc286-f8c6-436b-926a-0aedf7504098","Type":"ContainerStarted","Data":"2b50f92165330ebefdddbb58d950a85b98024973b7b74025113b1db6a925387f"} Feb 19 13:07:29 crc kubenswrapper[4833]: I0219 13:07:29.004332 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.004314612 podStartE2EDuration="3.004314612s" podCreationTimestamp="2026-02-19 13:07:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:07:28.999303623 +0000 UTC m=+1259.394822441" watchObservedRunningTime="2026-02-19 13:07:29.004314612 +0000 UTC m=+1259.399833380" Feb 19 13:07:29 crc kubenswrapper[4833]: I0219 13:07:29.993243 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f54b1557-980c-46ea-a7a7-994e77448ba7","Type":"ContainerStarted","Data":"7bcac7d7604c2f42e0de7d137b226f8a24e3515ce5e1ee776d046a18acae8660"} Feb 19 13:07:29 crc kubenswrapper[4833]: I0219 13:07:29.994464 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"f54b1557-980c-46ea-a7a7-994e77448ba7","Type":"ContainerStarted","Data":"39a6c4a751478c3088bb3a75e728c505b51d163df4587321790fa9603bf85353"} Feb 19 13:07:30 crc kubenswrapper[4833]: I0219 13:07:30.035397 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.035375158 podStartE2EDuration="2.035375158s" podCreationTimestamp="2026-02-19 13:07:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:07:30.019673284 +0000 UTC m=+1260.415192052" watchObservedRunningTime="2026-02-19 13:07:30.035375158 +0000 UTC m=+1260.430893926" Feb 19 13:07:32 crc kubenswrapper[4833]: I0219 13:07:32.366307 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 13:07:37 crc kubenswrapper[4833]: I0219 13:07:37.364913 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 13:07:37 crc kubenswrapper[4833]: I0219 13:07:37.412833 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 13:07:38 crc kubenswrapper[4833]: I0219 13:07:38.135454 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 13:07:38 crc kubenswrapper[4833]: I0219 13:07:38.399758 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 13:07:38 crc kubenswrapper[4833]: I0219 13:07:38.399839 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 13:07:39 crc kubenswrapper[4833]: I0219 13:07:39.481697 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f54b1557-980c-46ea-a7a7-994e77448ba7" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.199:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 13:07:39 crc kubenswrapper[4833]: I0219 13:07:39.482037 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f54b1557-980c-46ea-a7a7-994e77448ba7" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.199:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 13:07:41 crc kubenswrapper[4833]: I0219 13:07:41.026328 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 13:07:44 crc kubenswrapper[4833]: E0219 13:07:44.998017 4833 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04a482f4_f946_4c01_8f41_e1fbecbc2f53.slice/crio-conmon-83654509cf5a8b8576a81829893689a91f1b0ab33206d9ce48e66c3cd5e29d76.scope\": RecentStats: unable to find data in memory cache]" Feb 19 13:07:45 crc kubenswrapper[4833]: I0219 13:07:45.163897 4833 generic.go:334] "Generic (PLEG): container finished" podID="5237bb0f-1ee7-451b-9457-d72c90993794" containerID="18f780012a1ff5f71753c17c35ca61e1791a3e0203bdcd77eae2c76e2c2f5be6" exitCode=137 Feb 19 13:07:45 crc kubenswrapper[4833]: I0219 13:07:45.163989 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"5237bb0f-1ee7-451b-9457-d72c90993794","Type":"ContainerDied","Data":"18f780012a1ff5f71753c17c35ca61e1791a3e0203bdcd77eae2c76e2c2f5be6"} Feb 19 13:07:45 crc kubenswrapper[4833]: I0219 13:07:45.165815 4833 generic.go:334] "Generic (PLEG): container finished" podID="04a482f4-f946-4c01-8f41-e1fbecbc2f53" containerID="83654509cf5a8b8576a81829893689a91f1b0ab33206d9ce48e66c3cd5e29d76" exitCode=137 Feb 19 13:07:45 crc kubenswrapper[4833]: I0219 13:07:45.165860 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"04a482f4-f946-4c01-8f41-e1fbecbc2f53","Type":"ContainerDied","Data":"83654509cf5a8b8576a81829893689a91f1b0ab33206d9ce48e66c3cd5e29d76"} Feb 19 13:07:45 crc kubenswrapper[4833]: I0219 13:07:45.297574 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 13:07:45 crc kubenswrapper[4833]: I0219 13:07:45.305531 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:07:45 crc kubenswrapper[4833]: I0219 13:07:45.456552 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04a482f4-f946-4c01-8f41-e1fbecbc2f53-config-data\") pod \"04a482f4-f946-4c01-8f41-e1fbecbc2f53\" (UID: \"04a482f4-f946-4c01-8f41-e1fbecbc2f53\") " Feb 19 13:07:45 crc kubenswrapper[4833]: I0219 13:07:45.456671 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04a482f4-f946-4c01-8f41-e1fbecbc2f53-logs\") pod \"04a482f4-f946-4c01-8f41-e1fbecbc2f53\" (UID: \"04a482f4-f946-4c01-8f41-e1fbecbc2f53\") " Feb 19 13:07:45 crc kubenswrapper[4833]: I0219 13:07:45.456730 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzrcq\" (UniqueName: \"kubernetes.io/projected/5237bb0f-1ee7-451b-9457-d72c90993794-kube-api-access-dzrcq\") pod \"5237bb0f-1ee7-451b-9457-d72c90993794\" (UID: \"5237bb0f-1ee7-451b-9457-d72c90993794\") " Feb 19 13:07:45 crc kubenswrapper[4833]: I0219 13:07:45.456773 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5237bb0f-1ee7-451b-9457-d72c90993794-combined-ca-bundle\") pod \"5237bb0f-1ee7-451b-9457-d72c90993794\" (UID: \"5237bb0f-1ee7-451b-9457-d72c90993794\") " Feb 19 13:07:45 crc kubenswrapper[4833]: I0219 13:07:45.456830 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spmmm\" (UniqueName: \"kubernetes.io/projected/04a482f4-f946-4c01-8f41-e1fbecbc2f53-kube-api-access-spmmm\") pod \"04a482f4-f946-4c01-8f41-e1fbecbc2f53\" (UID: \"04a482f4-f946-4c01-8f41-e1fbecbc2f53\") " Feb 19 13:07:45 crc kubenswrapper[4833]: I0219 13:07:45.456862 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5237bb0f-1ee7-451b-9457-d72c90993794-config-data\") pod \"5237bb0f-1ee7-451b-9457-d72c90993794\" (UID: \"5237bb0f-1ee7-451b-9457-d72c90993794\") " Feb 19 13:07:45 crc kubenswrapper[4833]: I0219 13:07:45.456887 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04a482f4-f946-4c01-8f41-e1fbecbc2f53-combined-ca-bundle\") pod \"04a482f4-f946-4c01-8f41-e1fbecbc2f53\" (UID: \"04a482f4-f946-4c01-8f41-e1fbecbc2f53\") " Feb 19 
13:07:45 crc kubenswrapper[4833]: I0219 13:07:45.458620 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04a482f4-f946-4c01-8f41-e1fbecbc2f53-logs" (OuterVolumeSpecName: "logs") pod "04a482f4-f946-4c01-8f41-e1fbecbc2f53" (UID: "04a482f4-f946-4c01-8f41-e1fbecbc2f53"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:07:45 crc kubenswrapper[4833]: I0219 13:07:45.462964 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5237bb0f-1ee7-451b-9457-d72c90993794-kube-api-access-dzrcq" (OuterVolumeSpecName: "kube-api-access-dzrcq") pod "5237bb0f-1ee7-451b-9457-d72c90993794" (UID: "5237bb0f-1ee7-451b-9457-d72c90993794"). InnerVolumeSpecName "kube-api-access-dzrcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:07:45 crc kubenswrapper[4833]: I0219 13:07:45.463556 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04a482f4-f946-4c01-8f41-e1fbecbc2f53-kube-api-access-spmmm" (OuterVolumeSpecName: "kube-api-access-spmmm") pod "04a482f4-f946-4c01-8f41-e1fbecbc2f53" (UID: "04a482f4-f946-4c01-8f41-e1fbecbc2f53"). InnerVolumeSpecName "kube-api-access-spmmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:07:45 crc kubenswrapper[4833]: I0219 13:07:45.482902 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5237bb0f-1ee7-451b-9457-d72c90993794-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5237bb0f-1ee7-451b-9457-d72c90993794" (UID: "5237bb0f-1ee7-451b-9457-d72c90993794"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:07:45 crc kubenswrapper[4833]: I0219 13:07:45.482961 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04a482f4-f946-4c01-8f41-e1fbecbc2f53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04a482f4-f946-4c01-8f41-e1fbecbc2f53" (UID: "04a482f4-f946-4c01-8f41-e1fbecbc2f53"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:07:45 crc kubenswrapper[4833]: I0219 13:07:45.488444 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5237bb0f-1ee7-451b-9457-d72c90993794-config-data" (OuterVolumeSpecName: "config-data") pod "5237bb0f-1ee7-451b-9457-d72c90993794" (UID: "5237bb0f-1ee7-451b-9457-d72c90993794"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:07:45 crc kubenswrapper[4833]: I0219 13:07:45.489223 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04a482f4-f946-4c01-8f41-e1fbecbc2f53-config-data" (OuterVolumeSpecName: "config-data") pod "04a482f4-f946-4c01-8f41-e1fbecbc2f53" (UID: "04a482f4-f946-4c01-8f41-e1fbecbc2f53"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:07:45 crc kubenswrapper[4833]: I0219 13:07:45.559071 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spmmm\" (UniqueName: \"kubernetes.io/projected/04a482f4-f946-4c01-8f41-e1fbecbc2f53-kube-api-access-spmmm\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:45 crc kubenswrapper[4833]: I0219 13:07:45.559112 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5237bb0f-1ee7-451b-9457-d72c90993794-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:45 crc kubenswrapper[4833]: I0219 13:07:45.559122 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04a482f4-f946-4c01-8f41-e1fbecbc2f53-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:45 crc kubenswrapper[4833]: I0219 13:07:45.559130 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04a482f4-f946-4c01-8f41-e1fbecbc2f53-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:45 crc kubenswrapper[4833]: I0219 13:07:45.559141 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04a482f4-f946-4c01-8f41-e1fbecbc2f53-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:45 crc kubenswrapper[4833]: I0219 13:07:45.559152 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzrcq\" (UniqueName: \"kubernetes.io/projected/5237bb0f-1ee7-451b-9457-d72c90993794-kube-api-access-dzrcq\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:45 crc kubenswrapper[4833]: I0219 13:07:45.559162 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5237bb0f-1ee7-451b-9457-d72c90993794-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.180163 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"04a482f4-f946-4c01-8f41-e1fbecbc2f53","Type":"ContainerDied","Data":"6910f713f43130c849bd2d4c0f77c71934d918ca84e2191d423c066144f71f09"} Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.180218 4833 scope.go:117] "RemoveContainer" containerID="83654509cf5a8b8576a81829893689a91f1b0ab33206d9ce48e66c3cd5e29d76" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.180250 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.183839 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5237bb0f-1ee7-451b-9457-d72c90993794","Type":"ContainerDied","Data":"d4883cf8be2978fed43a86e74810bccfb3f323a41ed1d9b61c4cd0039aa326c8"} Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.183893 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.229277 4833 scope.go:117] "RemoveContainer" containerID="5c250c600a82fa894d17e179e45465106442605b6a769ce4eb2bf8134b8be7ae" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.240550 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.262400 4833 scope.go:117] "RemoveContainer" containerID="18f780012a1ff5f71753c17c35ca61e1791a3e0203bdcd77eae2c76e2c2f5be6" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.265036 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.278283 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.288045 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.312417 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 13:07:46 crc kubenswrapper[4833]: E0219 13:07:46.312891 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5237bb0f-1ee7-451b-9457-d72c90993794" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.312913 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5237bb0f-1ee7-451b-9457-d72c90993794" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 13:07:46 crc kubenswrapper[4833]: E0219 13:07:46.312944 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04a482f4-f946-4c01-8f41-e1fbecbc2f53" containerName="nova-metadata-metadata" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.312953 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="04a482f4-f946-4c01-8f41-e1fbecbc2f53" containerName="nova-metadata-metadata" Feb 19 13:07:46 crc kubenswrapper[4833]: E0219 13:07:46.312971 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04a482f4-f946-4c01-8f41-e1fbecbc2f53" containerName="nova-metadata-log" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.312979 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="04a482f4-f946-4c01-8f41-e1fbecbc2f53" containerName="nova-metadata-log" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.313372 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="04a482f4-f946-4c01-8f41-e1fbecbc2f53" containerName="nova-metadata-log" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.313405 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="04a482f4-f946-4c01-8f41-e1fbecbc2f53" containerName="nova-metadata-metadata" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.313423 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="5237bb0f-1ee7-451b-9457-d72c90993794" containerName="nova-cell1-novncproxy-novncproxy" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.314227 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.316211 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.318002 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.318099 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.340579 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04a482f4-f946-4c01-8f41-e1fbecbc2f53" path="/var/lib/kubelet/pods/04a482f4-f946-4c01-8f41-e1fbecbc2f53/volumes" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.341475 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5237bb0f-1ee7-451b-9457-d72c90993794" path="/var/lib/kubelet/pods/5237bb0f-1ee7-451b-9457-d72c90993794/volumes" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.341965 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.342930 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.344740 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.347064 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.347243 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.350827 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.380661 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77001968-5717-445a-b12e-a1318c720b23-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"77001968-5717-445a-b12e-a1318c720b23\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.380729 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2bd5\" (UniqueName: \"kubernetes.io/projected/77001968-5717-445a-b12e-a1318c720b23-kube-api-access-s2bd5\") pod \"nova-cell1-novncproxy-0\" (UID: \"77001968-5717-445a-b12e-a1318c720b23\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.381077 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/77001968-5717-445a-b12e-a1318c720b23-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"77001968-5717-445a-b12e-a1318c720b23\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.381173 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/77001968-5717-445a-b12e-a1318c720b23-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"77001968-5717-445a-b12e-a1318c720b23\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.381281 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/77001968-5717-445a-b12e-a1318c720b23-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"77001968-5717-445a-b12e-a1318c720b23\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.483670 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/77001968-5717-445a-b12e-a1318c720b23-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"77001968-5717-445a-b12e-a1318c720b23\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.483726 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/433a5df1-d123-4962-b66f-4d4ef7abaa50-logs\") pod \"nova-metadata-0\" (UID: \"433a5df1-d123-4962-b66f-4d4ef7abaa50\") " pod="openstack/nova-metadata-0" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.483778 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/433a5df1-d123-4962-b66f-4d4ef7abaa50-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"433a5df1-d123-4962-b66f-4d4ef7abaa50\") " pod="openstack/nova-metadata-0" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.483905 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77001968-5717-445a-b12e-a1318c720b23-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"77001968-5717-445a-b12e-a1318c720b23\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.483930 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2bd5\" (UniqueName: \"kubernetes.io/projected/77001968-5717-445a-b12e-a1318c720b23-kube-api-access-s2bd5\") pod \"nova-cell1-novncproxy-0\" (UID: \"77001968-5717-445a-b12e-a1318c720b23\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.484813 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/433a5df1-d123-4962-b66f-4d4ef7abaa50-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"433a5df1-d123-4962-b66f-4d4ef7abaa50\") " pod="openstack/nova-metadata-0" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.484948 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/77001968-5717-445a-b12e-a1318c720b23-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"77001968-5717-445a-b12e-a1318c720b23\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.484999 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/433a5df1-d123-4962-b66f-4d4ef7abaa50-config-data\") pod \"nova-metadata-0\" (UID: \"433a5df1-d123-4962-b66f-4d4ef7abaa50\") " pod="openstack/nova-metadata-0" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.485043 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77001968-5717-445a-b12e-a1318c720b23-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"77001968-5717-445a-b12e-a1318c720b23\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.485102 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4frgk\" (UniqueName: \"kubernetes.io/projected/433a5df1-d123-4962-b66f-4d4ef7abaa50-kube-api-access-4frgk\") pod \"nova-metadata-0\" (UID: \"433a5df1-d123-4962-b66f-4d4ef7abaa50\") " pod="openstack/nova-metadata-0" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.490463 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77001968-5717-445a-b12e-a1318c720b23-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"77001968-5717-445a-b12e-a1318c720b23\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.490960 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77001968-5717-445a-b12e-a1318c720b23-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"77001968-5717-445a-b12e-a1318c720b23\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.491040 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/77001968-5717-445a-b12e-a1318c720b23-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"77001968-5717-445a-b12e-a1318c720b23\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.491102 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/77001968-5717-445a-b12e-a1318c720b23-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"77001968-5717-445a-b12e-a1318c720b23\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.502379 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2bd5\" (UniqueName: \"kubernetes.io/projected/77001968-5717-445a-b12e-a1318c720b23-kube-api-access-s2bd5\") pod \"nova-cell1-novncproxy-0\" (UID: \"77001968-5717-445a-b12e-a1318c720b23\") " pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.586684 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/433a5df1-d123-4962-b66f-4d4ef7abaa50-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"433a5df1-d123-4962-b66f-4d4ef7abaa50\") " pod="openstack/nova-metadata-0" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.586923 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/433a5df1-d123-4962-b66f-4d4ef7abaa50-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"433a5df1-d123-4962-b66f-4d4ef7abaa50\") " pod="openstack/nova-metadata-0" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.587026 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/433a5df1-d123-4962-b66f-4d4ef7abaa50-config-data\") pod \"nova-metadata-0\" (UID: \"433a5df1-d123-4962-b66f-4d4ef7abaa50\") " pod="openstack/nova-metadata-0" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.587163 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4frgk\" (UniqueName: \"kubernetes.io/projected/433a5df1-d123-4962-b66f-4d4ef7abaa50-kube-api-access-4frgk\") pod \"nova-metadata-0\" (UID: \"433a5df1-d123-4962-b66f-4d4ef7abaa50\") " pod="openstack/nova-metadata-0" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.587199 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/433a5df1-d123-4962-b66f-4d4ef7abaa50-logs\") pod \"nova-metadata-0\" (UID: \"433a5df1-d123-4962-b66f-4d4ef7abaa50\") " pod="openstack/nova-metadata-0" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.590865 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/433a5df1-d123-4962-b66f-4d4ef7abaa50-logs\") pod \"nova-metadata-0\" (UID: \"433a5df1-d123-4962-b66f-4d4ef7abaa50\") " pod="openstack/nova-metadata-0" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.592689 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/433a5df1-d123-4962-b66f-4d4ef7abaa50-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"433a5df1-d123-4962-b66f-4d4ef7abaa50\") " pod="openstack/nova-metadata-0" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.592918 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/433a5df1-d123-4962-b66f-4d4ef7abaa50-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"433a5df1-d123-4962-b66f-4d4ef7abaa50\") " pod="openstack/nova-metadata-0" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.596412 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/433a5df1-d123-4962-b66f-4d4ef7abaa50-config-data\") pod \"nova-metadata-0\" (UID: \"433a5df1-d123-4962-b66f-4d4ef7abaa50\") " pod="openstack/nova-metadata-0" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.624720 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4frgk\" (UniqueName: \"kubernetes.io/projected/433a5df1-d123-4962-b66f-4d4ef7abaa50-kube-api-access-4frgk\") pod \"nova-metadata-0\" (UID: \"433a5df1-d123-4962-b66f-4d4ef7abaa50\") " pod="openstack/nova-metadata-0" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.635620 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:07:46 crc kubenswrapper[4833]: I0219 13:07:46.660858 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 13:07:47 crc kubenswrapper[4833]: I0219 13:07:47.226046 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 19 13:07:47 crc kubenswrapper[4833]: W0219 13:07:47.257337 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod433a5df1_d123_4962_b66f_4d4ef7abaa50.slice/crio-56b7640a5772ee9a8290525069bf66ca92f3776dc7fe6e81145efd751d882d5f WatchSource:0}: Error finding container 56b7640a5772ee9a8290525069bf66ca92f3776dc7fe6e81145efd751d882d5f: Status 404 returned error can't find the container with id 56b7640a5772ee9a8290525069bf66ca92f3776dc7fe6e81145efd751d882d5f Feb 19 13:07:47 crc kubenswrapper[4833]: I0219 13:07:47.272142 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 13:07:48 crc kubenswrapper[4833]: I0219 13:07:48.209051 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"433a5df1-d123-4962-b66f-4d4ef7abaa50","Type":"ContainerStarted","Data":"9cabf63153cd1dc1969f298e480f66e3e7afd981d4624eaba139ce46b9b23a13"} Feb 19 13:07:48 crc kubenswrapper[4833]: I0219 13:07:48.209426 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"433a5df1-d123-4962-b66f-4d4ef7abaa50","Type":"ContainerStarted","Data":"dfdf243c45cfb478af8394d24bb81c5e511f3d0b7921cce39997f4b62e743d0f"} Feb 19 13:07:48 crc kubenswrapper[4833]: I0219 13:07:48.209443 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"433a5df1-d123-4962-b66f-4d4ef7abaa50","Type":"ContainerStarted","Data":"56b7640a5772ee9a8290525069bf66ca92f3776dc7fe6e81145efd751d882d5f"} Feb 19 13:07:48 crc kubenswrapper[4833]: I0219 13:07:48.213224 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"77001968-5717-445a-b12e-a1318c720b23","Type":"ContainerStarted","Data":"2d2892bdb9f980de585f0da4b8ad2bdff840efe34eb8730d16496a58832995c4"} Feb 19 13:07:48 crc kubenswrapper[4833]: I0219 13:07:48.213299 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"77001968-5717-445a-b12e-a1318c720b23","Type":"ContainerStarted","Data":"c36f829f4ad81b967be809ca24ce039cace326483eb9b3dd0aed5758469c0f4c"} Feb 19 13:07:48 crc kubenswrapper[4833]: I0219 13:07:48.242263 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.24223848 podStartE2EDuration="2.24223848s" podCreationTimestamp="2026-02-19 13:07:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:07:48.233588461 +0000 UTC m=+1278.629107229" watchObservedRunningTime="2026-02-19 13:07:48.24223848 +0000 UTC m=+1278.637757268" Feb 19 13:07:48 crc kubenswrapper[4833]: I0219 13:07:48.261726 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.261701727 podStartE2EDuration="2.261701727s" podCreationTimestamp="2026-02-19 13:07:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:07:48.255767044 +0000 UTC m=+1278.651285822" watchObservedRunningTime="2026-02-19 13:07:48.261701727 +0000 UTC m=+1278.657220495" 
Feb 19 13:07:48 crc kubenswrapper[4833]: I0219 13:07:48.515862 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 13:07:48 crc kubenswrapper[4833]: I0219 13:07:48.516251 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 13:07:48 crc kubenswrapper[4833]: I0219 13:07:48.516547 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 13:07:48 crc kubenswrapper[4833]: I0219 13:07:48.516717 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 13:07:48 crc kubenswrapper[4833]: I0219 13:07:48.518685 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 13:07:48 crc kubenswrapper[4833]: I0219 13:07:48.525952 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 13:07:48 crc kubenswrapper[4833]: I0219 13:07:48.716474 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-xtgk8"] Feb 19 13:07:48 crc kubenswrapper[4833]: I0219 13:07:48.718721 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-xtgk8" Feb 19 13:07:48 crc kubenswrapper[4833]: I0219 13:07:48.726886 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-xtgk8"] Feb 19 13:07:48 crc kubenswrapper[4833]: I0219 13:07:48.833675 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-xtgk8\" (UID: \"9fbd0cf0-2b11-4cf2-af96-a2ab369efe20\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-xtgk8" Feb 19 13:07:48 crc kubenswrapper[4833]: I0219 13:07:48.833817 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-xtgk8\" (UID: \"9fbd0cf0-2b11-4cf2-af96-a2ab369efe20\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-xtgk8" Feb 19 13:07:48 crc kubenswrapper[4833]: I0219 13:07:48.833874 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-xtgk8\" (UID: \"9fbd0cf0-2b11-4cf2-af96-a2ab369efe20\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-xtgk8" Feb 19 13:07:48 crc kubenswrapper[4833]: I0219 13:07:48.833904 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qb88\" (UniqueName: \"kubernetes.io/projected/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-kube-api-access-6qb88\") pod \"dnsmasq-dns-cd5cbd7b9-xtgk8\" (UID: \"9fbd0cf0-2b11-4cf2-af96-a2ab369efe20\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-xtgk8" Feb 19 13:07:48 crc kubenswrapper[4833]: I0219 13:07:48.833924 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-xtgk8\" (UID: \"9fbd0cf0-2b11-4cf2-af96-a2ab369efe20\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-xtgk8" Feb 19 13:07:48 crc 
kubenswrapper[4833]: I0219 13:07:48.833947 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-config\") pod \"dnsmasq-dns-cd5cbd7b9-xtgk8\" (UID: \"9fbd0cf0-2b11-4cf2-af96-a2ab369efe20\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-xtgk8" Feb 19 13:07:48 crc kubenswrapper[4833]: I0219 13:07:48.935695 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-xtgk8\" (UID: \"9fbd0cf0-2b11-4cf2-af96-a2ab369efe20\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-xtgk8" Feb 19 13:07:48 crc kubenswrapper[4833]: I0219 13:07:48.935827 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-xtgk8\" (UID: \"9fbd0cf0-2b11-4cf2-af96-a2ab369efe20\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-xtgk8" Feb 19 13:07:48 crc kubenswrapper[4833]: I0219 13:07:48.935879 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-xtgk8\" (UID: \"9fbd0cf0-2b11-4cf2-af96-a2ab369efe20\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-xtgk8" Feb 19 13:07:48 crc kubenswrapper[4833]: I0219 13:07:48.935902 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-xtgk8\" (UID: \"9fbd0cf0-2b11-4cf2-af96-a2ab369efe20\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-xtgk8" Feb 19 13:07:48 crc kubenswrapper[4833]: I0219 13:07:48.935924 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qb88\" (UniqueName: \"kubernetes.io/projected/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-kube-api-access-6qb88\") pod \"dnsmasq-dns-cd5cbd7b9-xtgk8\" (UID: \"9fbd0cf0-2b11-4cf2-af96-a2ab369efe20\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-xtgk8" Feb 19 13:07:48 crc kubenswrapper[4833]: I0219 13:07:48.935946 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-config\") pod \"dnsmasq-dns-cd5cbd7b9-xtgk8\" (UID: \"9fbd0cf0-2b11-4cf2-af96-a2ab369efe20\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-xtgk8" Feb 19 13:07:48 crc kubenswrapper[4833]: I0219 13:07:48.936886 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-config\") pod \"dnsmasq-dns-cd5cbd7b9-xtgk8\" (UID: \"9fbd0cf0-2b11-4cf2-af96-a2ab369efe20\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-xtgk8" Feb 19 13:07:48 crc kubenswrapper[4833]: I0219 13:07:48.937456 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-xtgk8\" (UID: \"9fbd0cf0-2b11-4cf2-af96-a2ab369efe20\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-xtgk8" Feb 19 13:07:48 crc kubenswrapper[4833]: I0219 13:07:48.938055 4833 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-xtgk8\" (UID: \"9fbd0cf0-2b11-4cf2-af96-a2ab369efe20\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-xtgk8" Feb 19 13:07:48 crc kubenswrapper[4833]: I0219 13:07:48.938681 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-xtgk8\" (UID: \"9fbd0cf0-2b11-4cf2-af96-a2ab369efe20\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-xtgk8" Feb 19 13:07:48 crc kubenswrapper[4833]: I0219 13:07:48.939400 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-xtgk8\" (UID: \"9fbd0cf0-2b11-4cf2-af96-a2ab369efe20\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-xtgk8" Feb 19 13:07:48 crc kubenswrapper[4833]: I0219 13:07:48.955998 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qb88\" (UniqueName: \"kubernetes.io/projected/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-kube-api-access-6qb88\") pod \"dnsmasq-dns-cd5cbd7b9-xtgk8\" (UID: \"9fbd0cf0-2b11-4cf2-af96-a2ab369efe20\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-xtgk8" Feb 19 13:07:49 crc kubenswrapper[4833]: I0219 13:07:49.066755 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-xtgk8" Feb 19 13:07:49 crc kubenswrapper[4833]: I0219 13:07:49.566414 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-xtgk8"] Feb 19 13:07:50 crc kubenswrapper[4833]: I0219 13:07:50.233283 4833 generic.go:334] "Generic (PLEG): container finished" podID="9fbd0cf0-2b11-4cf2-af96-a2ab369efe20" containerID="29e197fb9dffbbe5d6911419f369a9b1ffdf726132da1d32f47aa208c9174267" exitCode=0 Feb 19 13:07:50 crc kubenswrapper[4833]: I0219 13:07:50.233376 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-xtgk8" event={"ID":"9fbd0cf0-2b11-4cf2-af96-a2ab369efe20","Type":"ContainerDied","Data":"29e197fb9dffbbe5d6911419f369a9b1ffdf726132da1d32f47aa208c9174267"} Feb 19 13:07:50 crc kubenswrapper[4833]: I0219 13:07:50.233745 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-xtgk8" event={"ID":"9fbd0cf0-2b11-4cf2-af96-a2ab369efe20","Type":"ContainerStarted","Data":"c338f8595d4a3d5f8237769c60b5ee393ce2f0ca8f90ce2155bd83c9498b9807"} Feb 19 13:07:50 crc kubenswrapper[4833]: I0219 13:07:50.967766 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 13:07:51 crc kubenswrapper[4833]: I0219 13:07:51.083571 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:07:51 crc kubenswrapper[4833]: I0219 13:07:51.084326 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab2a9f14-787f-4834-a8e0-f4b55638492d" containerName="sg-core" containerID="cri-o://c2af436d770e2002ae73f88ea66fed8d69627077d9df14bc1d3e219e462beff7" gracePeriod=30 Feb 19 13:07:51 crc kubenswrapper[4833]: I0219 13:07:51.084416 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab2a9f14-787f-4834-a8e0-f4b55638492d" containerName="proxy-httpd" 
containerID="cri-o://74388ebf2fed1ae8fe55614e555d106c2e812c8c1e65c439bfd6a79cc3539c15" gracePeriod=30 Feb 19 13:07:51 crc kubenswrapper[4833]: I0219 13:07:51.084733 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab2a9f14-787f-4834-a8e0-f4b55638492d" containerName="ceilometer-notification-agent" containerID="cri-o://eb948ff3c7dcc71ef558a3c1afb6237b02ca8703270af3fbde4bc0715d8181c7" gracePeriod=30 Feb 19 13:07:51 crc kubenswrapper[4833]: I0219 13:07:51.084847 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab2a9f14-787f-4834-a8e0-f4b55638492d" containerName="ceilometer-central-agent" containerID="cri-o://4c815edfdeac571082a304d3654501a286d20accf557521f71996c7e5f169fe6" gracePeriod=30 Feb 19 13:07:51 crc kubenswrapper[4833]: I0219 13:07:51.245908 4833 generic.go:334] "Generic (PLEG): container finished" podID="ab2a9f14-787f-4834-a8e0-f4b55638492d" containerID="74388ebf2fed1ae8fe55614e555d106c2e812c8c1e65c439bfd6a79cc3539c15" exitCode=0 Feb 19 13:07:51 crc kubenswrapper[4833]: I0219 13:07:51.245953 4833 generic.go:334] "Generic (PLEG): container finished" podID="ab2a9f14-787f-4834-a8e0-f4b55638492d" containerID="c2af436d770e2002ae73f88ea66fed8d69627077d9df14bc1d3e219e462beff7" exitCode=2 Feb 19 13:07:51 crc kubenswrapper[4833]: I0219 13:07:51.245962 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab2a9f14-787f-4834-a8e0-f4b55638492d","Type":"ContainerDied","Data":"74388ebf2fed1ae8fe55614e555d106c2e812c8c1e65c439bfd6a79cc3539c15"} Feb 19 13:07:51 crc kubenswrapper[4833]: I0219 13:07:51.246003 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab2a9f14-787f-4834-a8e0-f4b55638492d","Type":"ContainerDied","Data":"c2af436d770e2002ae73f88ea66fed8d69627077d9df14bc1d3e219e462beff7"} Feb 19 13:07:51 crc kubenswrapper[4833]: I0219 13:07:51.247843 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f54b1557-980c-46ea-a7a7-994e77448ba7" containerName="nova-api-log" containerID="cri-o://39a6c4a751478c3088bb3a75e728c505b51d163df4587321790fa9603bf85353" gracePeriod=30 Feb 19 13:07:51 crc kubenswrapper[4833]: I0219 13:07:51.248030 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-xtgk8" event={"ID":"9fbd0cf0-2b11-4cf2-af96-a2ab369efe20","Type":"ContainerStarted","Data":"35edf2db1a7c42d070f348f8ca5ea679818dbcfd739d7fd4f2d9d8366873d6ed"} Feb 19 13:07:51 crc kubenswrapper[4833]: I0219 13:07:51.248156 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f54b1557-980c-46ea-a7a7-994e77448ba7" containerName="nova-api-api" containerID="cri-o://7bcac7d7604c2f42e0de7d137b226f8a24e3515ce5e1ee776d046a18acae8660" gracePeriod=30 Feb 19 13:07:51 crc kubenswrapper[4833]: I0219 13:07:51.248238 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-xtgk8" Feb 19 13:07:51 crc kubenswrapper[4833]: I0219 13:07:51.298535 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-xtgk8" podStartSLOduration=3.298484728 podStartE2EDuration="3.298484728s" podCreationTimestamp="2026-02-19 13:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:07:51.269799406 
+0000 UTC m=+1281.665318204" watchObservedRunningTime="2026-02-19 13:07:51.298484728 +0000 UTC m=+1281.694003516" Feb 19 13:07:51 crc kubenswrapper[4833]: I0219 13:07:51.636015 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:07:51 crc kubenswrapper[4833]: I0219 13:07:51.661764 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 13:07:51 crc kubenswrapper[4833]: I0219 13:07:51.661819 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 13:07:52 crc kubenswrapper[4833]: I0219 13:07:52.277476 4833 generic.go:334] "Generic (PLEG): container finished" podID="f54b1557-980c-46ea-a7a7-994e77448ba7" containerID="39a6c4a751478c3088bb3a75e728c505b51d163df4587321790fa9603bf85353" exitCode=143 Feb 19 13:07:52 crc kubenswrapper[4833]: I0219 13:07:52.277564 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f54b1557-980c-46ea-a7a7-994e77448ba7","Type":"ContainerDied","Data":"39a6c4a751478c3088bb3a75e728c505b51d163df4587321790fa9603bf85353"} Feb 19 13:07:52 crc kubenswrapper[4833]: I0219 13:07:52.284764 4833 generic.go:334] "Generic (PLEG): container finished" podID="ab2a9f14-787f-4834-a8e0-f4b55638492d" containerID="4c815edfdeac571082a304d3654501a286d20accf557521f71996c7e5f169fe6" exitCode=0 Feb 19 13:07:52 crc kubenswrapper[4833]: I0219 13:07:52.284884 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab2a9f14-787f-4834-a8e0-f4b55638492d","Type":"ContainerDied","Data":"4c815edfdeac571082a304d3654501a286d20accf557521f71996c7e5f169fe6"} Feb 19 13:07:54 crc kubenswrapper[4833]: I0219 13:07:54.884002 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 13:07:54 crc kubenswrapper[4833]: I0219 13:07:54.973446 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl68d\" (UniqueName: \"kubernetes.io/projected/f54b1557-980c-46ea-a7a7-994e77448ba7-kube-api-access-rl68d\") pod \"f54b1557-980c-46ea-a7a7-994e77448ba7\" (UID: \"f54b1557-980c-46ea-a7a7-994e77448ba7\") " Feb 19 13:07:54 crc kubenswrapper[4833]: I0219 13:07:54.973992 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f54b1557-980c-46ea-a7a7-994e77448ba7-logs\") pod \"f54b1557-980c-46ea-a7a7-994e77448ba7\" (UID: \"f54b1557-980c-46ea-a7a7-994e77448ba7\") " Feb 19 13:07:54 crc kubenswrapper[4833]: I0219 13:07:54.974154 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f54b1557-980c-46ea-a7a7-994e77448ba7-config-data\") pod \"f54b1557-980c-46ea-a7a7-994e77448ba7\" (UID: \"f54b1557-980c-46ea-a7a7-994e77448ba7\") " Feb 19 13:07:54 crc kubenswrapper[4833]: I0219 13:07:54.974341 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f54b1557-980c-46ea-a7a7-994e77448ba7-combined-ca-bundle\") pod \"f54b1557-980c-46ea-a7a7-994e77448ba7\" (UID: \"f54b1557-980c-46ea-a7a7-994e77448ba7\") " Feb 19 13:07:54 crc kubenswrapper[4833]: I0219 13:07:54.974474 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f54b1557-980c-46ea-a7a7-994e77448ba7-logs" (OuterVolumeSpecName: "logs") pod "f54b1557-980c-46ea-a7a7-994e77448ba7" (UID: "f54b1557-980c-46ea-a7a7-994e77448ba7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:07:54 crc kubenswrapper[4833]: I0219 13:07:54.974999 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f54b1557-980c-46ea-a7a7-994e77448ba7-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:54 crc kubenswrapper[4833]: I0219 13:07:54.982361 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f54b1557-980c-46ea-a7a7-994e77448ba7-kube-api-access-rl68d" (OuterVolumeSpecName: "kube-api-access-rl68d") pod "f54b1557-980c-46ea-a7a7-994e77448ba7" (UID: "f54b1557-980c-46ea-a7a7-994e77448ba7"). InnerVolumeSpecName "kube-api-access-rl68d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.004382 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f54b1557-980c-46ea-a7a7-994e77448ba7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f54b1557-980c-46ea-a7a7-994e77448ba7" (UID: "f54b1557-980c-46ea-a7a7-994e77448ba7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.004566 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f54b1557-980c-46ea-a7a7-994e77448ba7-config-data" (OuterVolumeSpecName: "config-data") pod "f54b1557-980c-46ea-a7a7-994e77448ba7" (UID: "f54b1557-980c-46ea-a7a7-994e77448ba7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.076219 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f54b1557-980c-46ea-a7a7-994e77448ba7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.076247 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl68d\" (UniqueName: \"kubernetes.io/projected/f54b1557-980c-46ea-a7a7-994e77448ba7-kube-api-access-rl68d\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.076259 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f54b1557-980c-46ea-a7a7-994e77448ba7-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.310399 4833 generic.go:334] "Generic (PLEG): container finished" podID="f54b1557-980c-46ea-a7a7-994e77448ba7" containerID="7bcac7d7604c2f42e0de7d137b226f8a24e3515ce5e1ee776d046a18acae8660" exitCode=0 Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.310444 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f54b1557-980c-46ea-a7a7-994e77448ba7","Type":"ContainerDied","Data":"7bcac7d7604c2f42e0de7d137b226f8a24e3515ce5e1ee776d046a18acae8660"} Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.310517 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f54b1557-980c-46ea-a7a7-994e77448ba7","Type":"ContainerDied","Data":"56a2ff208c79e9f3f1b76d3a3349c6d380c2d448ed1b8ed8ab48fdaf0e5645a1"} Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.310530 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.310540 4833 scope.go:117] "RemoveContainer" containerID="7bcac7d7604c2f42e0de7d137b226f8a24e3515ce5e1ee776d046a18acae8660" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.332768 4833 scope.go:117] "RemoveContainer" containerID="39a6c4a751478c3088bb3a75e728c505b51d163df4587321790fa9603bf85353" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.349991 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.360325 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.362673 4833 scope.go:117] "RemoveContainer" containerID="7bcac7d7604c2f42e0de7d137b226f8a24e3515ce5e1ee776d046a18acae8660" Feb 19 13:07:55 crc kubenswrapper[4833]: E0219 13:07:55.365314 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bcac7d7604c2f42e0de7d137b226f8a24e3515ce5e1ee776d046a18acae8660\": container with ID starting with 7bcac7d7604c2f42e0de7d137b226f8a24e3515ce5e1ee776d046a18acae8660 not found: ID does not exist" containerID="7bcac7d7604c2f42e0de7d137b226f8a24e3515ce5e1ee776d046a18acae8660" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.365356 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bcac7d7604c2f42e0de7d137b226f8a24e3515ce5e1ee776d046a18acae8660"} err="failed to get container status \"7bcac7d7604c2f42e0de7d137b226f8a24e3515ce5e1ee776d046a18acae8660\": rpc error: code = NotFound desc = could not find container \"7bcac7d7604c2f42e0de7d137b226f8a24e3515ce5e1ee776d046a18acae8660\": container with ID starting with 7bcac7d7604c2f42e0de7d137b226f8a24e3515ce5e1ee776d046a18acae8660 not found: ID does not exist" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.365386 4833 scope.go:117] "RemoveContainer" containerID="39a6c4a751478c3088bb3a75e728c505b51d163df4587321790fa9603bf85353" Feb 19 13:07:55 crc kubenswrapper[4833]: E0219 13:07:55.367098 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39a6c4a751478c3088bb3a75e728c505b51d163df4587321790fa9603bf85353\": container with ID starting with 39a6c4a751478c3088bb3a75e728c505b51d163df4587321790fa9603bf85353 not found: ID does not exist" containerID="39a6c4a751478c3088bb3a75e728c505b51d163df4587321790fa9603bf85353" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.367120 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39a6c4a751478c3088bb3a75e728c505b51d163df4587321790fa9603bf85353"} err="failed to get container status \"39a6c4a751478c3088bb3a75e728c505b51d163df4587321790fa9603bf85353\": rpc error: code = NotFound desc = could not find container \"39a6c4a751478c3088bb3a75e728c505b51d163df4587321790fa9603bf85353\": container with ID starting with 39a6c4a751478c3088bb3a75e728c505b51d163df4587321790fa9603bf85353 not found: ID does not exist" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.369630 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 13:07:55 crc kubenswrapper[4833]: E0219 13:07:55.370837 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f54b1557-980c-46ea-a7a7-994e77448ba7" containerName="nova-api-log" Feb 19 13:07:55 crc 
kubenswrapper[4833]: I0219 13:07:55.370855 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f54b1557-980c-46ea-a7a7-994e77448ba7" containerName="nova-api-log" Feb 19 13:07:55 crc kubenswrapper[4833]: E0219 13:07:55.370866 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f54b1557-980c-46ea-a7a7-994e77448ba7" containerName="nova-api-api" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.370872 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f54b1557-980c-46ea-a7a7-994e77448ba7" containerName="nova-api-api" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.371055 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f54b1557-980c-46ea-a7a7-994e77448ba7" containerName="nova-api-log" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.371072 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f54b1557-980c-46ea-a7a7-994e77448ba7" containerName="nova-api-api" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.371959 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.374675 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.376031 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.378620 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.384681 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.481331 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/765c6d2d-c493-4ade-80cc-27121aad8038-internal-tls-certs\") pod \"nova-api-0\" (UID: \"765c6d2d-c493-4ade-80cc-27121aad8038\") " pod="openstack/nova-api-0" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.481403 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/765c6d2d-c493-4ade-80cc-27121aad8038-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"765c6d2d-c493-4ade-80cc-27121aad8038\") " pod="openstack/nova-api-0" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.481461 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/765c6d2d-c493-4ade-80cc-27121aad8038-public-tls-certs\") pod \"nova-api-0\" (UID: \"765c6d2d-c493-4ade-80cc-27121aad8038\") " pod="openstack/nova-api-0" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.481512 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/765c6d2d-c493-4ade-80cc-27121aad8038-config-data\") pod \"nova-api-0\" (UID: \"765c6d2d-c493-4ade-80cc-27121aad8038\") " pod="openstack/nova-api-0" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.481534 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bmmg\" (UniqueName: \"kubernetes.io/projected/765c6d2d-c493-4ade-80cc-27121aad8038-kube-api-access-8bmmg\") pod 
\"nova-api-0\" (UID: \"765c6d2d-c493-4ade-80cc-27121aad8038\") " pod="openstack/nova-api-0" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.481737 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/765c6d2d-c493-4ade-80cc-27121aad8038-logs\") pod \"nova-api-0\" (UID: \"765c6d2d-c493-4ade-80cc-27121aad8038\") " pod="openstack/nova-api-0" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.583740 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/765c6d2d-c493-4ade-80cc-27121aad8038-public-tls-certs\") pod \"nova-api-0\" (UID: \"765c6d2d-c493-4ade-80cc-27121aad8038\") " pod="openstack/nova-api-0" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.583800 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/765c6d2d-c493-4ade-80cc-27121aad8038-config-data\") pod \"nova-api-0\" (UID: \"765c6d2d-c493-4ade-80cc-27121aad8038\") " pod="openstack/nova-api-0" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.583827 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bmmg\" (UniqueName: \"kubernetes.io/projected/765c6d2d-c493-4ade-80cc-27121aad8038-kube-api-access-8bmmg\") pod \"nova-api-0\" (UID: \"765c6d2d-c493-4ade-80cc-27121aad8038\") " pod="openstack/nova-api-0" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.583930 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/765c6d2d-c493-4ade-80cc-27121aad8038-logs\") pod \"nova-api-0\" (UID: \"765c6d2d-c493-4ade-80cc-27121aad8038\") " pod="openstack/nova-api-0" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.583947 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/765c6d2d-c493-4ade-80cc-27121aad8038-internal-tls-certs\") pod \"nova-api-0\" (UID: \"765c6d2d-c493-4ade-80cc-27121aad8038\") " pod="openstack/nova-api-0" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.583982 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/765c6d2d-c493-4ade-80cc-27121aad8038-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"765c6d2d-c493-4ade-80cc-27121aad8038\") " pod="openstack/nova-api-0" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.584635 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/765c6d2d-c493-4ade-80cc-27121aad8038-logs\") pod \"nova-api-0\" (UID: \"765c6d2d-c493-4ade-80cc-27121aad8038\") " pod="openstack/nova-api-0" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.589075 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/765c6d2d-c493-4ade-80cc-27121aad8038-public-tls-certs\") pod \"nova-api-0\" (UID: \"765c6d2d-c493-4ade-80cc-27121aad8038\") " pod="openstack/nova-api-0" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.589907 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/765c6d2d-c493-4ade-80cc-27121aad8038-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"765c6d2d-c493-4ade-80cc-27121aad8038\") " 
pod="openstack/nova-api-0" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.590130 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/765c6d2d-c493-4ade-80cc-27121aad8038-config-data\") pod \"nova-api-0\" (UID: \"765c6d2d-c493-4ade-80cc-27121aad8038\") " pod="openstack/nova-api-0" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.592180 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/765c6d2d-c493-4ade-80cc-27121aad8038-internal-tls-certs\") pod \"nova-api-0\" (UID: \"765c6d2d-c493-4ade-80cc-27121aad8038\") " pod="openstack/nova-api-0" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.602298 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bmmg\" (UniqueName: \"kubernetes.io/projected/765c6d2d-c493-4ade-80cc-27121aad8038-kube-api-access-8bmmg\") pod \"nova-api-0\" (UID: \"765c6d2d-c493-4ade-80cc-27121aad8038\") " pod="openstack/nova-api-0" Feb 19 13:07:55 crc kubenswrapper[4833]: I0219 13:07:55.727978 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 13:07:56 crc kubenswrapper[4833]: I0219 13:07:56.206849 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 13:07:56 crc kubenswrapper[4833]: I0219 13:07:56.325530 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f54b1557-980c-46ea-a7a7-994e77448ba7" path="/var/lib/kubelet/pods/f54b1557-980c-46ea-a7a7-994e77448ba7/volumes" Feb 19 13:07:56 crc kubenswrapper[4833]: I0219 13:07:56.326458 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"765c6d2d-c493-4ade-80cc-27121aad8038","Type":"ContainerStarted","Data":"6953ac2001ea81127f01c57dcbcccef389d46cdf84bb2f6f05bbe3f5774d19d6"} Feb 19 13:07:56 crc kubenswrapper[4833]: I0219 13:07:56.636641 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:07:56 crc kubenswrapper[4833]: I0219 13:07:56.660406 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:07:56 crc kubenswrapper[4833]: I0219 13:07:56.661027 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 13:07:56 crc kubenswrapper[4833]: I0219 13:07:56.661083 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.342657 4833 generic.go:334] "Generic (PLEG): container finished" podID="ab2a9f14-787f-4834-a8e0-f4b55638492d" containerID="eb948ff3c7dcc71ef558a3c1afb6237b02ca8703270af3fbde4bc0715d8181c7" exitCode=0 Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.342928 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab2a9f14-787f-4834-a8e0-f4b55638492d","Type":"ContainerDied","Data":"eb948ff3c7dcc71ef558a3c1afb6237b02ca8703270af3fbde4bc0715d8181c7"} Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.346396 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"765c6d2d-c493-4ade-80cc-27121aad8038","Type":"ContainerStarted","Data":"714b36568cf51e211c24f9144210d718b15a249eb58de8f4406ac4ddef539a34"} Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.346423 4833 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"765c6d2d-c493-4ade-80cc-27121aad8038","Type":"ContainerStarted","Data":"45e58bbac70e2b1540d78e43f50321f21fc2deebc33dbe35ba6aba120ed23667"} Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.367186 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.404445 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.404423192 podStartE2EDuration="2.404423192s" podCreationTimestamp="2026-02-19 13:07:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:07:57.367467132 +0000 UTC m=+1287.762985910" watchObservedRunningTime="2026-02-19 13:07:57.404423192 +0000 UTC m=+1287.799941980" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.530671 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.551034 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-b7bqt"] Feb 19 13:07:57 crc kubenswrapper[4833]: E0219 13:07:57.551438 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab2a9f14-787f-4834-a8e0-f4b55638492d" containerName="proxy-httpd" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.551456 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab2a9f14-787f-4834-a8e0-f4b55638492d" containerName="proxy-httpd" Feb 19 13:07:57 crc kubenswrapper[4833]: E0219 13:07:57.551471 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab2a9f14-787f-4834-a8e0-f4b55638492d" containerName="sg-core" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.551478 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab2a9f14-787f-4834-a8e0-f4b55638492d" containerName="sg-core" Feb 19 13:07:57 crc kubenswrapper[4833]: E0219 13:07:57.551505 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab2a9f14-787f-4834-a8e0-f4b55638492d" containerName="ceilometer-central-agent" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.551513 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab2a9f14-787f-4834-a8e0-f4b55638492d" containerName="ceilometer-central-agent" Feb 19 13:07:57 crc kubenswrapper[4833]: E0219 13:07:57.551526 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab2a9f14-787f-4834-a8e0-f4b55638492d" containerName="ceilometer-notification-agent" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.551532 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab2a9f14-787f-4834-a8e0-f4b55638492d" containerName="ceilometer-notification-agent" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.551720 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab2a9f14-787f-4834-a8e0-f4b55638492d" containerName="proxy-httpd" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.551734 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab2a9f14-787f-4834-a8e0-f4b55638492d" containerName="sg-core" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.551752 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab2a9f14-787f-4834-a8e0-f4b55638492d" containerName="ceilometer-notification-agent" Feb 19 13:07:57 crc 
kubenswrapper[4833]: I0219 13:07:57.551767 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab2a9f14-787f-4834-a8e0-f4b55638492d" containerName="ceilometer-central-agent" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.552347 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-b7bqt" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.561845 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.583706 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-b7bqt"] Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.604837 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.677637 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="433a5df1-d123-4962-b66f-4d4ef7abaa50" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.677716 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="433a5df1-d123-4962-b66f-4d4ef7abaa50" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.730628 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab2a9f14-787f-4834-a8e0-f4b55638492d-config-data\") pod \"ab2a9f14-787f-4834-a8e0-f4b55638492d\" (UID: \"ab2a9f14-787f-4834-a8e0-f4b55638492d\") " Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.731059 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab2a9f14-787f-4834-a8e0-f4b55638492d-combined-ca-bundle\") pod \"ab2a9f14-787f-4834-a8e0-f4b55638492d\" (UID: \"ab2a9f14-787f-4834-a8e0-f4b55638492d\") " Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.731091 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab2a9f14-787f-4834-a8e0-f4b55638492d-scripts\") pod \"ab2a9f14-787f-4834-a8e0-f4b55638492d\" (UID: \"ab2a9f14-787f-4834-a8e0-f4b55638492d\") " Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.731137 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab2a9f14-787f-4834-a8e0-f4b55638492d-ceilometer-tls-certs\") pod \"ab2a9f14-787f-4834-a8e0-f4b55638492d\" (UID: \"ab2a9f14-787f-4834-a8e0-f4b55638492d\") " Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.731174 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab2a9f14-787f-4834-a8e0-f4b55638492d-run-httpd\") pod \"ab2a9f14-787f-4834-a8e0-f4b55638492d\" (UID: \"ab2a9f14-787f-4834-a8e0-f4b55638492d\") " Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.731288 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-z8hzk\" (UniqueName: \"kubernetes.io/projected/ab2a9f14-787f-4834-a8e0-f4b55638492d-kube-api-access-z8hzk\") pod \"ab2a9f14-787f-4834-a8e0-f4b55638492d\" (UID: \"ab2a9f14-787f-4834-a8e0-f4b55638492d\") " Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.731318 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab2a9f14-787f-4834-a8e0-f4b55638492d-sg-core-conf-yaml\") pod \"ab2a9f14-787f-4834-a8e0-f4b55638492d\" (UID: \"ab2a9f14-787f-4834-a8e0-f4b55638492d\") " Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.731425 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab2a9f14-787f-4834-a8e0-f4b55638492d-log-httpd\") pod \"ab2a9f14-787f-4834-a8e0-f4b55638492d\" (UID: \"ab2a9f14-787f-4834-a8e0-f4b55638492d\") " Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.731766 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b18380f-3674-4e04-a34d-bf81ba3c58c8-config-data\") pod \"nova-cell1-cell-mapping-b7bqt\" (UID: \"5b18380f-3674-4e04-a34d-bf81ba3c58c8\") " pod="openstack/nova-cell1-cell-mapping-b7bqt" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.731833 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dljdn\" (UniqueName: \"kubernetes.io/projected/5b18380f-3674-4e04-a34d-bf81ba3c58c8-kube-api-access-dljdn\") pod \"nova-cell1-cell-mapping-b7bqt\" (UID: \"5b18380f-3674-4e04-a34d-bf81ba3c58c8\") " pod="openstack/nova-cell1-cell-mapping-b7bqt" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.731968 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b18380f-3674-4e04-a34d-bf81ba3c58c8-scripts\") pod \"nova-cell1-cell-mapping-b7bqt\" (UID: \"5b18380f-3674-4e04-a34d-bf81ba3c58c8\") " pod="openstack/nova-cell1-cell-mapping-b7bqt" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.732019 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b18380f-3674-4e04-a34d-bf81ba3c58c8-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-b7bqt\" (UID: \"5b18380f-3674-4e04-a34d-bf81ba3c58c8\") " pod="openstack/nova-cell1-cell-mapping-b7bqt" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.740782 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab2a9f14-787f-4834-a8e0-f4b55638492d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ab2a9f14-787f-4834-a8e0-f4b55638492d" (UID: "ab2a9f14-787f-4834-a8e0-f4b55638492d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.741889 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab2a9f14-787f-4834-a8e0-f4b55638492d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ab2a9f14-787f-4834-a8e0-f4b55638492d" (UID: "ab2a9f14-787f-4834-a8e0-f4b55638492d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.754469 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab2a9f14-787f-4834-a8e0-f4b55638492d-scripts" (OuterVolumeSpecName: "scripts") pod "ab2a9f14-787f-4834-a8e0-f4b55638492d" (UID: "ab2a9f14-787f-4834-a8e0-f4b55638492d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.756646 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab2a9f14-787f-4834-a8e0-f4b55638492d-kube-api-access-z8hzk" (OuterVolumeSpecName: "kube-api-access-z8hzk") pod "ab2a9f14-787f-4834-a8e0-f4b55638492d" (UID: "ab2a9f14-787f-4834-a8e0-f4b55638492d"). InnerVolumeSpecName "kube-api-access-z8hzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.834571 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b18380f-3674-4e04-a34d-bf81ba3c58c8-config-data\") pod \"nova-cell1-cell-mapping-b7bqt\" (UID: \"5b18380f-3674-4e04-a34d-bf81ba3c58c8\") " pod="openstack/nova-cell1-cell-mapping-b7bqt" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.834857 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dljdn\" (UniqueName: \"kubernetes.io/projected/5b18380f-3674-4e04-a34d-bf81ba3c58c8-kube-api-access-dljdn\") pod \"nova-cell1-cell-mapping-b7bqt\" (UID: \"5b18380f-3674-4e04-a34d-bf81ba3c58c8\") " pod="openstack/nova-cell1-cell-mapping-b7bqt" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.835012 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b18380f-3674-4e04-a34d-bf81ba3c58c8-scripts\") pod \"nova-cell1-cell-mapping-b7bqt\" (UID: \"5b18380f-3674-4e04-a34d-bf81ba3c58c8\") " pod="openstack/nova-cell1-cell-mapping-b7bqt" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.835114 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b18380f-3674-4e04-a34d-bf81ba3c58c8-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-b7bqt\" (UID: \"5b18380f-3674-4e04-a34d-bf81ba3c58c8\") " pod="openstack/nova-cell1-cell-mapping-b7bqt" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.835253 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8hzk\" (UniqueName: \"kubernetes.io/projected/ab2a9f14-787f-4834-a8e0-f4b55638492d-kube-api-access-z8hzk\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.835317 4833 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab2a9f14-787f-4834-a8e0-f4b55638492d-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.835380 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab2a9f14-787f-4834-a8e0-f4b55638492d-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.835440 4833 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab2a9f14-787f-4834-a8e0-f4b55638492d-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 19 
13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.843871 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b18380f-3674-4e04-a34d-bf81ba3c58c8-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-b7bqt\" (UID: \"5b18380f-3674-4e04-a34d-bf81ba3c58c8\") " pod="openstack/nova-cell1-cell-mapping-b7bqt" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.847678 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b18380f-3674-4e04-a34d-bf81ba3c58c8-scripts\") pod \"nova-cell1-cell-mapping-b7bqt\" (UID: \"5b18380f-3674-4e04-a34d-bf81ba3c58c8\") " pod="openstack/nova-cell1-cell-mapping-b7bqt" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.849224 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b18380f-3674-4e04-a34d-bf81ba3c58c8-config-data\") pod \"nova-cell1-cell-mapping-b7bqt\" (UID: \"5b18380f-3674-4e04-a34d-bf81ba3c58c8\") " pod="openstack/nova-cell1-cell-mapping-b7bqt" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.853738 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab2a9f14-787f-4834-a8e0-f4b55638492d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ab2a9f14-787f-4834-a8e0-f4b55638492d" (UID: "ab2a9f14-787f-4834-a8e0-f4b55638492d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.857422 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab2a9f14-787f-4834-a8e0-f4b55638492d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "ab2a9f14-787f-4834-a8e0-f4b55638492d" (UID: "ab2a9f14-787f-4834-a8e0-f4b55638492d"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.859922 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dljdn\" (UniqueName: \"kubernetes.io/projected/5b18380f-3674-4e04-a34d-bf81ba3c58c8-kube-api-access-dljdn\") pod \"nova-cell1-cell-mapping-b7bqt\" (UID: \"5b18380f-3674-4e04-a34d-bf81ba3c58c8\") " pod="openstack/nova-cell1-cell-mapping-b7bqt" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.883667 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab2a9f14-787f-4834-a8e0-f4b55638492d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab2a9f14-787f-4834-a8e0-f4b55638492d" (UID: "ab2a9f14-787f-4834-a8e0-f4b55638492d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.920997 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-b7bqt" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.931598 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab2a9f14-787f-4834-a8e0-f4b55638492d-config-data" (OuterVolumeSpecName: "config-data") pod "ab2a9f14-787f-4834-a8e0-f4b55638492d" (UID: "ab2a9f14-787f-4834-a8e0-f4b55638492d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.937444 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab2a9f14-787f-4834-a8e0-f4b55638492d-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.937469 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab2a9f14-787f-4834-a8e0-f4b55638492d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.937478 4833 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab2a9f14-787f-4834-a8e0-f4b55638492d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:57 crc kubenswrapper[4833]: I0219 13:07:57.937488 4833 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab2a9f14-787f-4834-a8e0-f4b55638492d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:58 crc kubenswrapper[4833]: W0219 13:07:58.372415 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b18380f_3674_4e04_a34d_bf81ba3c58c8.slice/crio-fec90e4bcaa0e1d54428679d723ed1ade017a910302db789a067511c15cd46f3 WatchSource:0}: Error finding container fec90e4bcaa0e1d54428679d723ed1ade017a910302db789a067511c15cd46f3: Status 404 returned error can't find the container with id fec90e4bcaa0e1d54428679d723ed1ade017a910302db789a067511c15cd46f3 Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.381341 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-b7bqt"] Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.381709 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab2a9f14-787f-4834-a8e0-f4b55638492d","Type":"ContainerDied","Data":"339cbdf4cd7c2c36f9a40bed6d0149c0ade08d8b1ea8d874827a170519faa3e6"} Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.381804 4833 scope.go:117] "RemoveContainer" containerID="74388ebf2fed1ae8fe55614e555d106c2e812c8c1e65c439bfd6a79cc3539c15" Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.382084 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.410565 4833 scope.go:117] "RemoveContainer" containerID="c2af436d770e2002ae73f88ea66fed8d69627077d9df14bc1d3e219e462beff7" Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.431231 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.446804 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.455428 4833 scope.go:117] "RemoveContainer" containerID="eb948ff3c7dcc71ef558a3c1afb6237b02ca8703270af3fbde4bc0715d8181c7" Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.470694 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.473091 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.477150 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.477664 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.477905 4833 scope.go:117] "RemoveContainer" containerID="4c815edfdeac571082a304d3654501a286d20accf557521f71996c7e5f169fe6" Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.478125 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.479965 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.610790 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgbtl\" (UniqueName: \"kubernetes.io/projected/8d9232be-2376-4ec9-9f32-16de9f8942d0-kube-api-access-cgbtl\") pod \"ceilometer-0\" (UID: \"8d9232be-2376-4ec9-9f32-16de9f8942d0\") " pod="openstack/ceilometer-0" Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.611240 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d9232be-2376-4ec9-9f32-16de9f8942d0-log-httpd\") pod \"ceilometer-0\" (UID: \"8d9232be-2376-4ec9-9f32-16de9f8942d0\") " pod="openstack/ceilometer-0" Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.611288 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d9232be-2376-4ec9-9f32-16de9f8942d0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8d9232be-2376-4ec9-9f32-16de9f8942d0\") " pod="openstack/ceilometer-0" Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.611313 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d9232be-2376-4ec9-9f32-16de9f8942d0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8d9232be-2376-4ec9-9f32-16de9f8942d0\") " pod="openstack/ceilometer-0" Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.611344 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d9232be-2376-4ec9-9f32-16de9f8942d0-config-data\") pod \"ceilometer-0\" (UID: \"8d9232be-2376-4ec9-9f32-16de9f8942d0\") " pod="openstack/ceilometer-0" Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.611486 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d9232be-2376-4ec9-9f32-16de9f8942d0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8d9232be-2376-4ec9-9f32-16de9f8942d0\") " pod="openstack/ceilometer-0" Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.611687 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d9232be-2376-4ec9-9f32-16de9f8942d0-scripts\") pod \"ceilometer-0\" (UID: \"8d9232be-2376-4ec9-9f32-16de9f8942d0\") " pod="openstack/ceilometer-0" Feb 19 13:07:58 crc kubenswrapper[4833]: 
I0219 13:07:58.611719 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d9232be-2376-4ec9-9f32-16de9f8942d0-run-httpd\") pod \"ceilometer-0\" (UID: \"8d9232be-2376-4ec9-9f32-16de9f8942d0\") " pod="openstack/ceilometer-0" Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.713694 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgbtl\" (UniqueName: \"kubernetes.io/projected/8d9232be-2376-4ec9-9f32-16de9f8942d0-kube-api-access-cgbtl\") pod \"ceilometer-0\" (UID: \"8d9232be-2376-4ec9-9f32-16de9f8942d0\") " pod="openstack/ceilometer-0" Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.713830 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d9232be-2376-4ec9-9f32-16de9f8942d0-log-httpd\") pod \"ceilometer-0\" (UID: \"8d9232be-2376-4ec9-9f32-16de9f8942d0\") " pod="openstack/ceilometer-0" Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.713889 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d9232be-2376-4ec9-9f32-16de9f8942d0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8d9232be-2376-4ec9-9f32-16de9f8942d0\") " pod="openstack/ceilometer-0" Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.713920 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d9232be-2376-4ec9-9f32-16de9f8942d0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8d9232be-2376-4ec9-9f32-16de9f8942d0\") " pod="openstack/ceilometer-0" Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.713964 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d9232be-2376-4ec9-9f32-16de9f8942d0-config-data\") pod \"ceilometer-0\" (UID: \"8d9232be-2376-4ec9-9f32-16de9f8942d0\") " pod="openstack/ceilometer-0" Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.714029 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d9232be-2376-4ec9-9f32-16de9f8942d0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8d9232be-2376-4ec9-9f32-16de9f8942d0\") " pod="openstack/ceilometer-0" Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.714116 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d9232be-2376-4ec9-9f32-16de9f8942d0-scripts\") pod \"ceilometer-0\" (UID: \"8d9232be-2376-4ec9-9f32-16de9f8942d0\") " pod="openstack/ceilometer-0" Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.714161 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d9232be-2376-4ec9-9f32-16de9f8942d0-run-httpd\") pod \"ceilometer-0\" (UID: \"8d9232be-2376-4ec9-9f32-16de9f8942d0\") " pod="openstack/ceilometer-0" Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.715011 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d9232be-2376-4ec9-9f32-16de9f8942d0-run-httpd\") pod \"ceilometer-0\" (UID: \"8d9232be-2376-4ec9-9f32-16de9f8942d0\") " pod="openstack/ceilometer-0" Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 
13:07:58.716067 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d9232be-2376-4ec9-9f32-16de9f8942d0-log-httpd\") pod \"ceilometer-0\" (UID: \"8d9232be-2376-4ec9-9f32-16de9f8942d0\") " pod="openstack/ceilometer-0" Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.722352 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d9232be-2376-4ec9-9f32-16de9f8942d0-scripts\") pod \"ceilometer-0\" (UID: \"8d9232be-2376-4ec9-9f32-16de9f8942d0\") " pod="openstack/ceilometer-0" Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.722589 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d9232be-2376-4ec9-9f32-16de9f8942d0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8d9232be-2376-4ec9-9f32-16de9f8942d0\") " pod="openstack/ceilometer-0" Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.725109 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d9232be-2376-4ec9-9f32-16de9f8942d0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8d9232be-2376-4ec9-9f32-16de9f8942d0\") " pod="openstack/ceilometer-0" Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.726258 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d9232be-2376-4ec9-9f32-16de9f8942d0-config-data\") pod \"ceilometer-0\" (UID: \"8d9232be-2376-4ec9-9f32-16de9f8942d0\") " pod="openstack/ceilometer-0" Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.729185 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d9232be-2376-4ec9-9f32-16de9f8942d0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8d9232be-2376-4ec9-9f32-16de9f8942d0\") " pod="openstack/ceilometer-0" Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.735813 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgbtl\" (UniqueName: \"kubernetes.io/projected/8d9232be-2376-4ec9-9f32-16de9f8942d0-kube-api-access-cgbtl\") pod \"ceilometer-0\" (UID: \"8d9232be-2376-4ec9-9f32-16de9f8942d0\") " pod="openstack/ceilometer-0" Feb 19 13:07:58 crc kubenswrapper[4833]: I0219 13:07:58.808342 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 19 13:07:59 crc kubenswrapper[4833]: I0219 13:07:59.068704 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-xtgk8" Feb 19 13:07:59 crc kubenswrapper[4833]: I0219 13:07:59.140311 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-tmzpd"] Feb 19 13:07:59 crc kubenswrapper[4833]: I0219 13:07:59.140568 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-tmzpd" podUID="1c84a4ef-18de-46c4-badf-8feafe986252" containerName="dnsmasq-dns" containerID="cri-o://b5946dd025acb1fa3f7ac2c615490f0f33a9fd0750c6b0d31837c3b5a28ed6de" gracePeriod=10 Feb 19 13:07:59 crc kubenswrapper[4833]: I0219 13:07:59.345947 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 19 13:07:59 crc kubenswrapper[4833]: I0219 13:07:59.414828 4833 generic.go:334] "Generic (PLEG): container finished" podID="1c84a4ef-18de-46c4-badf-8feafe986252" containerID="b5946dd025acb1fa3f7ac2c615490f0f33a9fd0750c6b0d31837c3b5a28ed6de" exitCode=0 Feb 19 13:07:59 crc kubenswrapper[4833]: I0219 13:07:59.415066 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-tmzpd" event={"ID":"1c84a4ef-18de-46c4-badf-8feafe986252","Type":"ContainerDied","Data":"b5946dd025acb1fa3f7ac2c615490f0f33a9fd0750c6b0d31837c3b5a28ed6de"} Feb 19 13:07:59 crc kubenswrapper[4833]: I0219 13:07:59.418088 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-b7bqt" event={"ID":"5b18380f-3674-4e04-a34d-bf81ba3c58c8","Type":"ContainerStarted","Data":"0f9cbd4f0184c941dcf22b49fa070819df69445d7a107c5005dfce814fd7212f"} Feb 19 13:07:59 crc kubenswrapper[4833]: I0219 13:07:59.418145 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-b7bqt" event={"ID":"5b18380f-3674-4e04-a34d-bf81ba3c58c8","Type":"ContainerStarted","Data":"fec90e4bcaa0e1d54428679d723ed1ade017a910302db789a067511c15cd46f3"} Feb 19 13:07:59 crc kubenswrapper[4833]: I0219 13:07:59.427306 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d9232be-2376-4ec9-9f32-16de9f8942d0","Type":"ContainerStarted","Data":"81a940080c585b1d0a014d411b7d21c197e2d4bab61381993d987faa4f66663b"} Feb 19 13:07:59 crc kubenswrapper[4833]: I0219 13:07:59.443219 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-b7bqt" podStartSLOduration=2.443102456 podStartE2EDuration="2.443102456s" podCreationTimestamp="2026-02-19 13:07:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:07:59.436569616 +0000 UTC m=+1289.832088384" watchObservedRunningTime="2026-02-19 13:07:59.443102456 +0000 UTC m=+1289.838621224" Feb 19 13:07:59 crc kubenswrapper[4833]: I0219 13:07:59.600460 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-tmzpd" Feb 19 13:07:59 crc kubenswrapper[4833]: I0219 13:07:59.635568 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c84a4ef-18de-46c4-badf-8feafe986252-dns-swift-storage-0\") pod \"1c84a4ef-18de-46c4-badf-8feafe986252\" (UID: \"1c84a4ef-18de-46c4-badf-8feafe986252\") " Feb 19 13:07:59 crc kubenswrapper[4833]: I0219 13:07:59.635699 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c84a4ef-18de-46c4-badf-8feafe986252-config\") pod \"1c84a4ef-18de-46c4-badf-8feafe986252\" (UID: \"1c84a4ef-18de-46c4-badf-8feafe986252\") " Feb 19 13:07:59 crc kubenswrapper[4833]: I0219 13:07:59.635806 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkfmt\" (UniqueName: \"kubernetes.io/projected/1c84a4ef-18de-46c4-badf-8feafe986252-kube-api-access-kkfmt\") pod \"1c84a4ef-18de-46c4-badf-8feafe986252\" (UID: \"1c84a4ef-18de-46c4-badf-8feafe986252\") " Feb 19 13:07:59 crc kubenswrapper[4833]: I0219 13:07:59.635850 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c84a4ef-18de-46c4-badf-8feafe986252-dns-svc\") pod \"1c84a4ef-18de-46c4-badf-8feafe986252\" (UID: \"1c84a4ef-18de-46c4-badf-8feafe986252\") " Feb 19 13:07:59 crc kubenswrapper[4833]: I0219 13:07:59.635902 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c84a4ef-18de-46c4-badf-8feafe986252-ovsdbserver-nb\") pod \"1c84a4ef-18de-46c4-badf-8feafe986252\" (UID: \"1c84a4ef-18de-46c4-badf-8feafe986252\") " Feb 19 13:07:59 crc kubenswrapper[4833]: I0219 13:07:59.635927 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c84a4ef-18de-46c4-badf-8feafe986252-ovsdbserver-sb\") pod \"1c84a4ef-18de-46c4-badf-8feafe986252\" (UID: \"1c84a4ef-18de-46c4-badf-8feafe986252\") " Feb 19 13:07:59 crc kubenswrapper[4833]: I0219 13:07:59.644229 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c84a4ef-18de-46c4-badf-8feafe986252-kube-api-access-kkfmt" (OuterVolumeSpecName: "kube-api-access-kkfmt") pod "1c84a4ef-18de-46c4-badf-8feafe986252" (UID: "1c84a4ef-18de-46c4-badf-8feafe986252"). InnerVolumeSpecName "kube-api-access-kkfmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:07:59 crc kubenswrapper[4833]: I0219 13:07:59.692003 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c84a4ef-18de-46c4-badf-8feafe986252-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1c84a4ef-18de-46c4-badf-8feafe986252" (UID: "1c84a4ef-18de-46c4-badf-8feafe986252"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:07:59 crc kubenswrapper[4833]: I0219 13:07:59.693801 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c84a4ef-18de-46c4-badf-8feafe986252-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1c84a4ef-18de-46c4-badf-8feafe986252" (UID: "1c84a4ef-18de-46c4-badf-8feafe986252"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:07:59 crc kubenswrapper[4833]: I0219 13:07:59.694177 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c84a4ef-18de-46c4-badf-8feafe986252-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1c84a4ef-18de-46c4-badf-8feafe986252" (UID: "1c84a4ef-18de-46c4-badf-8feafe986252"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:07:59 crc kubenswrapper[4833]: I0219 13:07:59.699070 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c84a4ef-18de-46c4-badf-8feafe986252-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1c84a4ef-18de-46c4-badf-8feafe986252" (UID: "1c84a4ef-18de-46c4-badf-8feafe986252"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:07:59 crc kubenswrapper[4833]: I0219 13:07:59.705756 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c84a4ef-18de-46c4-badf-8feafe986252-config" (OuterVolumeSpecName: "config") pod "1c84a4ef-18de-46c4-badf-8feafe986252" (UID: "1c84a4ef-18de-46c4-badf-8feafe986252"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:07:59 crc kubenswrapper[4833]: I0219 13:07:59.738417 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c84a4ef-18de-46c4-badf-8feafe986252-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:59 crc kubenswrapper[4833]: I0219 13:07:59.738456 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkfmt\" (UniqueName: \"kubernetes.io/projected/1c84a4ef-18de-46c4-badf-8feafe986252-kube-api-access-kkfmt\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:59 crc kubenswrapper[4833]: I0219 13:07:59.738470 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1c84a4ef-18de-46c4-badf-8feafe986252-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:59 crc kubenswrapper[4833]: I0219 13:07:59.738483 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1c84a4ef-18de-46c4-badf-8feafe986252-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:59 crc kubenswrapper[4833]: I0219 13:07:59.738513 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1c84a4ef-18de-46c4-badf-8feafe986252-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 13:07:59 crc kubenswrapper[4833]: I0219 13:07:59.738523 4833 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1c84a4ef-18de-46c4-badf-8feafe986252-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:00 crc kubenswrapper[4833]: I0219 13:08:00.330809 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab2a9f14-787f-4834-a8e0-f4b55638492d" path="/var/lib/kubelet/pods/ab2a9f14-787f-4834-a8e0-f4b55638492d/volumes" Feb 19 13:08:00 crc kubenswrapper[4833]: I0219 13:08:00.440098 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d9232be-2376-4ec9-9f32-16de9f8942d0","Type":"ContainerStarted","Data":"a65bc44dc60fcae8e304499e6e0d5693159a960abdba6b32e0985f2daf1ba6ef"} Feb 19 13:08:00 crc 
kubenswrapper[4833]: I0219 13:08:00.446026 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-tmzpd" Feb 19 13:08:00 crc kubenswrapper[4833]: I0219 13:08:00.446019 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-tmzpd" event={"ID":"1c84a4ef-18de-46c4-badf-8feafe986252","Type":"ContainerDied","Data":"be85ae0c0c49c6bc75171bd01f037978dd08960bea1a55492c2b88736fc9c98b"} Feb 19 13:08:00 crc kubenswrapper[4833]: I0219 13:08:00.446203 4833 scope.go:117] "RemoveContainer" containerID="b5946dd025acb1fa3f7ac2c615490f0f33a9fd0750c6b0d31837c3b5a28ed6de" Feb 19 13:08:00 crc kubenswrapper[4833]: I0219 13:08:00.467465 4833 scope.go:117] "RemoveContainer" containerID="085be12acf38eb8a2e5a6c1648d66d61ece278b10a676ec46c5b0a80078722f8" Feb 19 13:08:00 crc kubenswrapper[4833]: I0219 13:08:00.483800 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-tmzpd"] Feb 19 13:08:00 crc kubenswrapper[4833]: I0219 13:08:00.520541 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-tmzpd"] Feb 19 13:08:01 crc kubenswrapper[4833]: I0219 13:08:01.476577 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d9232be-2376-4ec9-9f32-16de9f8942d0","Type":"ContainerStarted","Data":"3133cda9e667405ec451697fc9e49eebbab924bd7b6afd5c2478472cfcb7b7ce"} Feb 19 13:08:02 crc kubenswrapper[4833]: I0219 13:08:02.326780 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c84a4ef-18de-46c4-badf-8feafe986252" path="/var/lib/kubelet/pods/1c84a4ef-18de-46c4-badf-8feafe986252/volumes" Feb 19 13:08:02 crc kubenswrapper[4833]: I0219 13:08:02.491280 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d9232be-2376-4ec9-9f32-16de9f8942d0","Type":"ContainerStarted","Data":"fb28d6d041f4c878413a8d7ef5bc0620aa6d3a92e7f3baeffea110a2d3b4f083"} Feb 19 13:08:03 crc kubenswrapper[4833]: I0219 13:08:03.504231 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d9232be-2376-4ec9-9f32-16de9f8942d0","Type":"ContainerStarted","Data":"c83da0868b6eee921f9ed24b03d26f44728d80a17718f4a60783e165348384f7"} Feb 19 13:08:03 crc kubenswrapper[4833]: I0219 13:08:03.504642 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 19 13:08:03 crc kubenswrapper[4833]: I0219 13:08:03.507010 4833 generic.go:334] "Generic (PLEG): container finished" podID="5b18380f-3674-4e04-a34d-bf81ba3c58c8" containerID="0f9cbd4f0184c941dcf22b49fa070819df69445d7a107c5005dfce814fd7212f" exitCode=0 Feb 19 13:08:03 crc kubenswrapper[4833]: I0219 13:08:03.507143 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-b7bqt" event={"ID":"5b18380f-3674-4e04-a34d-bf81ba3c58c8","Type":"ContainerDied","Data":"0f9cbd4f0184c941dcf22b49fa070819df69445d7a107c5005dfce814fd7212f"} Feb 19 13:08:03 crc kubenswrapper[4833]: I0219 13:08:03.535826 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.74726399 podStartE2EDuration="5.535805408s" podCreationTimestamp="2026-02-19 13:07:58 +0000 UTC" firstStartedPulling="2026-02-19 13:07:59.35629211 +0000 UTC m=+1289.751810888" lastFinishedPulling="2026-02-19 13:08:03.144833508 +0000 UTC m=+1293.540352306" observedRunningTime="2026-02-19 13:08:03.527691715 +0000 UTC 
m=+1293.923210503" watchObservedRunningTime="2026-02-19 13:08:03.535805408 +0000 UTC m=+1293.931324176" Feb 19 13:08:04 crc kubenswrapper[4833]: I0219 13:08:04.923268 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-b7bqt" Feb 19 13:08:05 crc kubenswrapper[4833]: I0219 13:08:05.059975 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b18380f-3674-4e04-a34d-bf81ba3c58c8-scripts\") pod \"5b18380f-3674-4e04-a34d-bf81ba3c58c8\" (UID: \"5b18380f-3674-4e04-a34d-bf81ba3c58c8\") " Feb 19 13:08:05 crc kubenswrapper[4833]: I0219 13:08:05.060021 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b18380f-3674-4e04-a34d-bf81ba3c58c8-combined-ca-bundle\") pod \"5b18380f-3674-4e04-a34d-bf81ba3c58c8\" (UID: \"5b18380f-3674-4e04-a34d-bf81ba3c58c8\") " Feb 19 13:08:05 crc kubenswrapper[4833]: I0219 13:08:05.060221 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b18380f-3674-4e04-a34d-bf81ba3c58c8-config-data\") pod \"5b18380f-3674-4e04-a34d-bf81ba3c58c8\" (UID: \"5b18380f-3674-4e04-a34d-bf81ba3c58c8\") " Feb 19 13:08:05 crc kubenswrapper[4833]: I0219 13:08:05.060294 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dljdn\" (UniqueName: \"kubernetes.io/projected/5b18380f-3674-4e04-a34d-bf81ba3c58c8-kube-api-access-dljdn\") pod \"5b18380f-3674-4e04-a34d-bf81ba3c58c8\" (UID: \"5b18380f-3674-4e04-a34d-bf81ba3c58c8\") " Feb 19 13:08:05 crc kubenswrapper[4833]: I0219 13:08:05.066798 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b18380f-3674-4e04-a34d-bf81ba3c58c8-kube-api-access-dljdn" (OuterVolumeSpecName: "kube-api-access-dljdn") pod "5b18380f-3674-4e04-a34d-bf81ba3c58c8" (UID: "5b18380f-3674-4e04-a34d-bf81ba3c58c8"). InnerVolumeSpecName "kube-api-access-dljdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:08:05 crc kubenswrapper[4833]: I0219 13:08:05.067579 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b18380f-3674-4e04-a34d-bf81ba3c58c8-scripts" (OuterVolumeSpecName: "scripts") pod "5b18380f-3674-4e04-a34d-bf81ba3c58c8" (UID: "5b18380f-3674-4e04-a34d-bf81ba3c58c8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:08:05 crc kubenswrapper[4833]: I0219 13:08:05.087466 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b18380f-3674-4e04-a34d-bf81ba3c58c8-config-data" (OuterVolumeSpecName: "config-data") pod "5b18380f-3674-4e04-a34d-bf81ba3c58c8" (UID: "5b18380f-3674-4e04-a34d-bf81ba3c58c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:08:05 crc kubenswrapper[4833]: I0219 13:08:05.093080 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b18380f-3674-4e04-a34d-bf81ba3c58c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b18380f-3674-4e04-a34d-bf81ba3c58c8" (UID: "5b18380f-3674-4e04-a34d-bf81ba3c58c8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:08:05 crc kubenswrapper[4833]: I0219 13:08:05.162881 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b18380f-3674-4e04-a34d-bf81ba3c58c8-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:05 crc kubenswrapper[4833]: I0219 13:08:05.163194 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dljdn\" (UniqueName: \"kubernetes.io/projected/5b18380f-3674-4e04-a34d-bf81ba3c58c8-kube-api-access-dljdn\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:05 crc kubenswrapper[4833]: I0219 13:08:05.163364 4833 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b18380f-3674-4e04-a34d-bf81ba3c58c8-scripts\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:05 crc kubenswrapper[4833]: I0219 13:08:05.163536 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b18380f-3674-4e04-a34d-bf81ba3c58c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:05 crc kubenswrapper[4833]: I0219 13:08:05.528837 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-b7bqt" event={"ID":"5b18380f-3674-4e04-a34d-bf81ba3c58c8","Type":"ContainerDied","Data":"fec90e4bcaa0e1d54428679d723ed1ade017a910302db789a067511c15cd46f3"} Feb 19 13:08:05 crc kubenswrapper[4833]: I0219 13:08:05.529223 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fec90e4bcaa0e1d54428679d723ed1ade017a910302db789a067511c15cd46f3" Feb 19 13:08:05 crc kubenswrapper[4833]: I0219 13:08:05.529129 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-b7bqt" Feb 19 13:08:05 crc kubenswrapper[4833]: I0219 13:08:05.728965 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 13:08:05 crc kubenswrapper[4833]: I0219 13:08:05.729563 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 13:08:05 crc kubenswrapper[4833]: I0219 13:08:05.807211 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 13:08:05 crc kubenswrapper[4833]: I0219 13:08:05.807874 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="433a5df1-d123-4962-b66f-4d4ef7abaa50" containerName="nova-metadata-metadata" containerID="cri-o://9cabf63153cd1dc1969f298e480f66e3e7afd981d4624eaba139ce46b9b23a13" gracePeriod=30 Feb 19 13:08:05 crc kubenswrapper[4833]: I0219 13:08:05.810396 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="433a5df1-d123-4962-b66f-4d4ef7abaa50" containerName="nova-metadata-log" containerID="cri-o://dfdf243c45cfb478af8394d24bb81c5e511f3d0b7921cce39997f4b62e743d0f" gracePeriod=30 Feb 19 13:08:05 crc kubenswrapper[4833]: I0219 13:08:05.826383 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 13:08:05 crc kubenswrapper[4833]: I0219 13:08:05.826667 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="dd0bc286-f8c6-436b-926a-0aedf7504098" containerName="nova-scheduler-scheduler" containerID="cri-o://2b50f92165330ebefdddbb58d950a85b98024973b7b74025113b1db6a925387f" gracePeriod=30 Feb 19 
13:08:05 crc kubenswrapper[4833]: I0219 13:08:05.837522 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 13:08:06 crc kubenswrapper[4833]: I0219 13:08:06.546853 4833 generic.go:334] "Generic (PLEG): container finished" podID="433a5df1-d123-4962-b66f-4d4ef7abaa50" containerID="dfdf243c45cfb478af8394d24bb81c5e511f3d0b7921cce39997f4b62e743d0f" exitCode=143 Feb 19 13:08:06 crc kubenswrapper[4833]: I0219 13:08:06.546945 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"433a5df1-d123-4962-b66f-4d4ef7abaa50","Type":"ContainerDied","Data":"dfdf243c45cfb478af8394d24bb81c5e511f3d0b7921cce39997f4b62e743d0f"} Feb 19 13:08:06 crc kubenswrapper[4833]: I0219 13:08:06.746670 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="765c6d2d-c493-4ade-80cc-27121aad8038" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 13:08:06 crc kubenswrapper[4833]: I0219 13:08:06.746721 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="765c6d2d-c493-4ade-80cc-27121aad8038" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 13:08:07 crc kubenswrapper[4833]: E0219 13:08:07.373543 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2b50f92165330ebefdddbb58d950a85b98024973b7b74025113b1db6a925387f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 13:08:07 crc kubenswrapper[4833]: E0219 13:08:07.375596 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2b50f92165330ebefdddbb58d950a85b98024973b7b74025113b1db6a925387f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 13:08:07 crc kubenswrapper[4833]: E0219 13:08:07.376901 4833 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2b50f92165330ebefdddbb58d950a85b98024973b7b74025113b1db6a925387f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 19 13:08:07 crc kubenswrapper[4833]: E0219 13:08:07.376936 4833 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="dd0bc286-f8c6-436b-926a-0aedf7504098" containerName="nova-scheduler-scheduler" Feb 19 13:08:07 crc kubenswrapper[4833]: I0219 13:08:07.554998 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="765c6d2d-c493-4ade-80cc-27121aad8038" containerName="nova-api-log" containerID="cri-o://45e58bbac70e2b1540d78e43f50321f21fc2deebc33dbe35ba6aba120ed23667" gracePeriod=30 Feb 19 13:08:07 crc kubenswrapper[4833]: I0219 13:08:07.555125 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="765c6d2d-c493-4ade-80cc-27121aad8038" 
containerName="nova-api-api" containerID="cri-o://714b36568cf51e211c24f9144210d718b15a249eb58de8f4406ac4ddef539a34" gracePeriod=30 Feb 19 13:08:08 crc kubenswrapper[4833]: I0219 13:08:08.567525 4833 generic.go:334] "Generic (PLEG): container finished" podID="765c6d2d-c493-4ade-80cc-27121aad8038" containerID="45e58bbac70e2b1540d78e43f50321f21fc2deebc33dbe35ba6aba120ed23667" exitCode=143 Feb 19 13:08:08 crc kubenswrapper[4833]: I0219 13:08:08.567582 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"765c6d2d-c493-4ade-80cc-27121aad8038","Type":"ContainerDied","Data":"45e58bbac70e2b1540d78e43f50321f21fc2deebc33dbe35ba6aba120ed23667"} Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.415936 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.559106 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/433a5df1-d123-4962-b66f-4d4ef7abaa50-logs\") pod \"433a5df1-d123-4962-b66f-4d4ef7abaa50\" (UID: \"433a5df1-d123-4962-b66f-4d4ef7abaa50\") " Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.559323 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4frgk\" (UniqueName: \"kubernetes.io/projected/433a5df1-d123-4962-b66f-4d4ef7abaa50-kube-api-access-4frgk\") pod \"433a5df1-d123-4962-b66f-4d4ef7abaa50\" (UID: \"433a5df1-d123-4962-b66f-4d4ef7abaa50\") " Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.559365 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/433a5df1-d123-4962-b66f-4d4ef7abaa50-combined-ca-bundle\") pod \"433a5df1-d123-4962-b66f-4d4ef7abaa50\" (UID: \"433a5df1-d123-4962-b66f-4d4ef7abaa50\") " Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.559484 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/433a5df1-d123-4962-b66f-4d4ef7abaa50-nova-metadata-tls-certs\") pod \"433a5df1-d123-4962-b66f-4d4ef7abaa50\" (UID: \"433a5df1-d123-4962-b66f-4d4ef7abaa50\") " Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.559535 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/433a5df1-d123-4962-b66f-4d4ef7abaa50-config-data\") pod \"433a5df1-d123-4962-b66f-4d4ef7abaa50\" (UID: \"433a5df1-d123-4962-b66f-4d4ef7abaa50\") " Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.559711 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/433a5df1-d123-4962-b66f-4d4ef7abaa50-logs" (OuterVolumeSpecName: "logs") pod "433a5df1-d123-4962-b66f-4d4ef7abaa50" (UID: "433a5df1-d123-4962-b66f-4d4ef7abaa50"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.560343 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/433a5df1-d123-4962-b66f-4d4ef7abaa50-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.566089 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/433a5df1-d123-4962-b66f-4d4ef7abaa50-kube-api-access-4frgk" (OuterVolumeSpecName: "kube-api-access-4frgk") pod "433a5df1-d123-4962-b66f-4d4ef7abaa50" (UID: "433a5df1-d123-4962-b66f-4d4ef7abaa50"). InnerVolumeSpecName "kube-api-access-4frgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.582393 4833 generic.go:334] "Generic (PLEG): container finished" podID="433a5df1-d123-4962-b66f-4d4ef7abaa50" containerID="9cabf63153cd1dc1969f298e480f66e3e7afd981d4624eaba139ce46b9b23a13" exitCode=0 Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.582450 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"433a5df1-d123-4962-b66f-4d4ef7abaa50","Type":"ContainerDied","Data":"9cabf63153cd1dc1969f298e480f66e3e7afd981d4624eaba139ce46b9b23a13"} Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.582466 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.582526 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"433a5df1-d123-4962-b66f-4d4ef7abaa50","Type":"ContainerDied","Data":"56b7640a5772ee9a8290525069bf66ca92f3776dc7fe6e81145efd751d882d5f"} Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.582561 4833 scope.go:117] "RemoveContainer" containerID="9cabf63153cd1dc1969f298e480f66e3e7afd981d4624eaba139ce46b9b23a13" Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.592795 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/433a5df1-d123-4962-b66f-4d4ef7abaa50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "433a5df1-d123-4962-b66f-4d4ef7abaa50" (UID: "433a5df1-d123-4962-b66f-4d4ef7abaa50"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.614360 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/433a5df1-d123-4962-b66f-4d4ef7abaa50-config-data" (OuterVolumeSpecName: "config-data") pod "433a5df1-d123-4962-b66f-4d4ef7abaa50" (UID: "433a5df1-d123-4962-b66f-4d4ef7abaa50"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.622300 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/433a5df1-d123-4962-b66f-4d4ef7abaa50-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "433a5df1-d123-4962-b66f-4d4ef7abaa50" (UID: "433a5df1-d123-4962-b66f-4d4ef7abaa50"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.656202 4833 scope.go:117] "RemoveContainer" containerID="dfdf243c45cfb478af8394d24bb81c5e511f3d0b7921cce39997f4b62e743d0f" Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.662861 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4frgk\" (UniqueName: \"kubernetes.io/projected/433a5df1-d123-4962-b66f-4d4ef7abaa50-kube-api-access-4frgk\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.662903 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/433a5df1-d123-4962-b66f-4d4ef7abaa50-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.662922 4833 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/433a5df1-d123-4962-b66f-4d4ef7abaa50-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.662939 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/433a5df1-d123-4962-b66f-4d4ef7abaa50-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.681112 4833 scope.go:117] "RemoveContainer" containerID="9cabf63153cd1dc1969f298e480f66e3e7afd981d4624eaba139ce46b9b23a13" Feb 19 13:08:09 crc kubenswrapper[4833]: E0219 13:08:09.681586 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cabf63153cd1dc1969f298e480f66e3e7afd981d4624eaba139ce46b9b23a13\": container with ID starting with 9cabf63153cd1dc1969f298e480f66e3e7afd981d4624eaba139ce46b9b23a13 not found: ID does not exist" containerID="9cabf63153cd1dc1969f298e480f66e3e7afd981d4624eaba139ce46b9b23a13" Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.681674 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cabf63153cd1dc1969f298e480f66e3e7afd981d4624eaba139ce46b9b23a13"} err="failed to get container status \"9cabf63153cd1dc1969f298e480f66e3e7afd981d4624eaba139ce46b9b23a13\": rpc error: code = NotFound desc = could not find container \"9cabf63153cd1dc1969f298e480f66e3e7afd981d4624eaba139ce46b9b23a13\": container with ID starting with 9cabf63153cd1dc1969f298e480f66e3e7afd981d4624eaba139ce46b9b23a13 not found: ID does not exist" Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.681716 4833 scope.go:117] "RemoveContainer" containerID="dfdf243c45cfb478af8394d24bb81c5e511f3d0b7921cce39997f4b62e743d0f" Feb 19 13:08:09 crc kubenswrapper[4833]: E0219 13:08:09.682164 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfdf243c45cfb478af8394d24bb81c5e511f3d0b7921cce39997f4b62e743d0f\": container with ID starting with dfdf243c45cfb478af8394d24bb81c5e511f3d0b7921cce39997f4b62e743d0f not found: ID does not exist" containerID="dfdf243c45cfb478af8394d24bb81c5e511f3d0b7921cce39997f4b62e743d0f" Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.682195 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfdf243c45cfb478af8394d24bb81c5e511f3d0b7921cce39997f4b62e743d0f"} err="failed to get container status \"dfdf243c45cfb478af8394d24bb81c5e511f3d0b7921cce39997f4b62e743d0f\": rpc 
error: code = NotFound desc = could not find container \"dfdf243c45cfb478af8394d24bb81c5e511f3d0b7921cce39997f4b62e743d0f\": container with ID starting with dfdf243c45cfb478af8394d24bb81c5e511f3d0b7921cce39997f4b62e743d0f not found: ID does not exist" Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.935916 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.948305 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.982864 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 19 13:08:09 crc kubenswrapper[4833]: E0219 13:08:09.983287 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c84a4ef-18de-46c4-badf-8feafe986252" containerName="dnsmasq-dns" Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.983302 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c84a4ef-18de-46c4-badf-8feafe986252" containerName="dnsmasq-dns" Feb 19 13:08:09 crc kubenswrapper[4833]: E0219 13:08:09.983317 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="433a5df1-d123-4962-b66f-4d4ef7abaa50" containerName="nova-metadata-metadata" Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.983325 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="433a5df1-d123-4962-b66f-4d4ef7abaa50" containerName="nova-metadata-metadata" Feb 19 13:08:09 crc kubenswrapper[4833]: E0219 13:08:09.983335 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c84a4ef-18de-46c4-badf-8feafe986252" containerName="init" Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.983343 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c84a4ef-18de-46c4-badf-8feafe986252" containerName="init" Feb 19 13:08:09 crc kubenswrapper[4833]: E0219 13:08:09.983354 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="433a5df1-d123-4962-b66f-4d4ef7abaa50" containerName="nova-metadata-log" Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.983361 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="433a5df1-d123-4962-b66f-4d4ef7abaa50" containerName="nova-metadata-log" Feb 19 13:08:09 crc kubenswrapper[4833]: E0219 13:08:09.983388 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b18380f-3674-4e04-a34d-bf81ba3c58c8" containerName="nova-manage" Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.983396 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b18380f-3674-4e04-a34d-bf81ba3c58c8" containerName="nova-manage" Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.983661 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="433a5df1-d123-4962-b66f-4d4ef7abaa50" containerName="nova-metadata-metadata" Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.983679 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c84a4ef-18de-46c4-badf-8feafe986252" containerName="dnsmasq-dns" Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.983694 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="433a5df1-d123-4962-b66f-4d4ef7abaa50" containerName="nova-metadata-log" Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.983709 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b18380f-3674-4e04-a34d-bf81ba3c58c8" containerName="nova-manage" Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.984820 4833 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.987652 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 19 13:08:09 crc kubenswrapper[4833]: I0219 13:08:09.995098 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 19 13:08:10 crc kubenswrapper[4833]: I0219 13:08:10.005282 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 13:08:10 crc kubenswrapper[4833]: I0219 13:08:10.070439 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59a4389e-efff-4621-bc9d-548f8c2b78f9-config-data\") pod \"nova-metadata-0\" (UID: \"59a4389e-efff-4621-bc9d-548f8c2b78f9\") " pod="openstack/nova-metadata-0" Feb 19 13:08:10 crc kubenswrapper[4833]: I0219 13:08:10.070576 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59a4389e-efff-4621-bc9d-548f8c2b78f9-logs\") pod \"nova-metadata-0\" (UID: \"59a4389e-efff-4621-bc9d-548f8c2b78f9\") " pod="openstack/nova-metadata-0" Feb 19 13:08:10 crc kubenswrapper[4833]: I0219 13:08:10.070615 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s9c8\" (UniqueName: \"kubernetes.io/projected/59a4389e-efff-4621-bc9d-548f8c2b78f9-kube-api-access-6s9c8\") pod \"nova-metadata-0\" (UID: \"59a4389e-efff-4621-bc9d-548f8c2b78f9\") " pod="openstack/nova-metadata-0" Feb 19 13:08:10 crc kubenswrapper[4833]: I0219 13:08:10.070761 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/59a4389e-efff-4621-bc9d-548f8c2b78f9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"59a4389e-efff-4621-bc9d-548f8c2b78f9\") " pod="openstack/nova-metadata-0" Feb 19 13:08:10 crc kubenswrapper[4833]: I0219 13:08:10.070832 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59a4389e-efff-4621-bc9d-548f8c2b78f9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"59a4389e-efff-4621-bc9d-548f8c2b78f9\") " pod="openstack/nova-metadata-0" Feb 19 13:08:10 crc kubenswrapper[4833]: I0219 13:08:10.173899 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59a4389e-efff-4621-bc9d-548f8c2b78f9-config-data\") pod \"nova-metadata-0\" (UID: \"59a4389e-efff-4621-bc9d-548f8c2b78f9\") " pod="openstack/nova-metadata-0" Feb 19 13:08:10 crc kubenswrapper[4833]: I0219 13:08:10.174067 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59a4389e-efff-4621-bc9d-548f8c2b78f9-logs\") pod \"nova-metadata-0\" (UID: \"59a4389e-efff-4621-bc9d-548f8c2b78f9\") " pod="openstack/nova-metadata-0" Feb 19 13:08:10 crc kubenswrapper[4833]: I0219 13:08:10.174253 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s9c8\" (UniqueName: \"kubernetes.io/projected/59a4389e-efff-4621-bc9d-548f8c2b78f9-kube-api-access-6s9c8\") pod \"nova-metadata-0\" (UID: \"59a4389e-efff-4621-bc9d-548f8c2b78f9\") 
" pod="openstack/nova-metadata-0" Feb 19 13:08:10 crc kubenswrapper[4833]: I0219 13:08:10.174746 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59a4389e-efff-4621-bc9d-548f8c2b78f9-logs\") pod \"nova-metadata-0\" (UID: \"59a4389e-efff-4621-bc9d-548f8c2b78f9\") " pod="openstack/nova-metadata-0" Feb 19 13:08:10 crc kubenswrapper[4833]: I0219 13:08:10.175157 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/59a4389e-efff-4621-bc9d-548f8c2b78f9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"59a4389e-efff-4621-bc9d-548f8c2b78f9\") " pod="openstack/nova-metadata-0" Feb 19 13:08:10 crc kubenswrapper[4833]: I0219 13:08:10.175344 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59a4389e-efff-4621-bc9d-548f8c2b78f9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"59a4389e-efff-4621-bc9d-548f8c2b78f9\") " pod="openstack/nova-metadata-0" Feb 19 13:08:10 crc kubenswrapper[4833]: I0219 13:08:10.178930 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/59a4389e-efff-4621-bc9d-548f8c2b78f9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"59a4389e-efff-4621-bc9d-548f8c2b78f9\") " pod="openstack/nova-metadata-0" Feb 19 13:08:10 crc kubenswrapper[4833]: I0219 13:08:10.179399 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59a4389e-efff-4621-bc9d-548f8c2b78f9-config-data\") pod \"nova-metadata-0\" (UID: \"59a4389e-efff-4621-bc9d-548f8c2b78f9\") " pod="openstack/nova-metadata-0" Feb 19 13:08:10 crc kubenswrapper[4833]: I0219 13:08:10.179803 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59a4389e-efff-4621-bc9d-548f8c2b78f9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"59a4389e-efff-4621-bc9d-548f8c2b78f9\") " pod="openstack/nova-metadata-0" Feb 19 13:08:10 crc kubenswrapper[4833]: I0219 13:08:10.194402 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s9c8\" (UniqueName: \"kubernetes.io/projected/59a4389e-efff-4621-bc9d-548f8c2b78f9-kube-api-access-6s9c8\") pod \"nova-metadata-0\" (UID: \"59a4389e-efff-4621-bc9d-548f8c2b78f9\") " pod="openstack/nova-metadata-0" Feb 19 13:08:10 crc kubenswrapper[4833]: I0219 13:08:10.329682 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="433a5df1-d123-4962-b66f-4d4ef7abaa50" path="/var/lib/kubelet/pods/433a5df1-d123-4962-b66f-4d4ef7abaa50/volumes" Feb 19 13:08:10 crc kubenswrapper[4833]: I0219 13:08:10.331679 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 19 13:08:10 crc kubenswrapper[4833]: I0219 13:08:10.771874 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 19 13:08:10 crc kubenswrapper[4833]: W0219 13:08:10.778003 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59a4389e_efff_4621_bc9d_548f8c2b78f9.slice/crio-840f35b2bc0acc67fe1e475d0994e62e191c8480b8c36dc8f2a2617e1dd44a4e WatchSource:0}: Error finding container 840f35b2bc0acc67fe1e475d0994e62e191c8480b8c36dc8f2a2617e1dd44a4e: Status 404 returned error can't find the container with id 840f35b2bc0acc67fe1e475d0994e62e191c8480b8c36dc8f2a2617e1dd44a4e Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.349550 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.503075 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd0bc286-f8c6-436b-926a-0aedf7504098-config-data\") pod \"dd0bc286-f8c6-436b-926a-0aedf7504098\" (UID: \"dd0bc286-f8c6-436b-926a-0aedf7504098\") " Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.503611 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czsfr\" (UniqueName: \"kubernetes.io/projected/dd0bc286-f8c6-436b-926a-0aedf7504098-kube-api-access-czsfr\") pod \"dd0bc286-f8c6-436b-926a-0aedf7504098\" (UID: \"dd0bc286-f8c6-436b-926a-0aedf7504098\") " Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.503819 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd0bc286-f8c6-436b-926a-0aedf7504098-combined-ca-bundle\") pod \"dd0bc286-f8c6-436b-926a-0aedf7504098\" (UID: \"dd0bc286-f8c6-436b-926a-0aedf7504098\") " Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.512716 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd0bc286-f8c6-436b-926a-0aedf7504098-kube-api-access-czsfr" (OuterVolumeSpecName: "kube-api-access-czsfr") pod "dd0bc286-f8c6-436b-926a-0aedf7504098" (UID: "dd0bc286-f8c6-436b-926a-0aedf7504098"). InnerVolumeSpecName "kube-api-access-czsfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.531258 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd0bc286-f8c6-436b-926a-0aedf7504098-config-data" (OuterVolumeSpecName: "config-data") pod "dd0bc286-f8c6-436b-926a-0aedf7504098" (UID: "dd0bc286-f8c6-436b-926a-0aedf7504098"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.533662 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd0bc286-f8c6-436b-926a-0aedf7504098-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd0bc286-f8c6-436b-926a-0aedf7504098" (UID: "dd0bc286-f8c6-436b-926a-0aedf7504098"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.603867 4833 generic.go:334] "Generic (PLEG): container finished" podID="dd0bc286-f8c6-436b-926a-0aedf7504098" containerID="2b50f92165330ebefdddbb58d950a85b98024973b7b74025113b1db6a925387f" exitCode=0 Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.603937 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dd0bc286-f8c6-436b-926a-0aedf7504098","Type":"ContainerDied","Data":"2b50f92165330ebefdddbb58d950a85b98024973b7b74025113b1db6a925387f"} Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.603943 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.603964 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dd0bc286-f8c6-436b-926a-0aedf7504098","Type":"ContainerDied","Data":"eff841d256aceefcc7e29ed89d643db1b975438137ea04b6124bbbda742e5ccb"} Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.603983 4833 scope.go:117] "RemoveContainer" containerID="2b50f92165330ebefdddbb58d950a85b98024973b7b74025113b1db6a925387f" Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.606907 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"59a4389e-efff-4621-bc9d-548f8c2b78f9","Type":"ContainerStarted","Data":"05f1365682cef08e01a13ac41d07add8ea395a622b7a585a716c7e4dd97f87aa"} Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.606956 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"59a4389e-efff-4621-bc9d-548f8c2b78f9","Type":"ContainerStarted","Data":"504b2aa800f6ba51844515f0ef59bade4fb8874476721124c539e733112d8485"} Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.606968 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"59a4389e-efff-4621-bc9d-548f8c2b78f9","Type":"ContainerStarted","Data":"840f35b2bc0acc67fe1e475d0994e62e191c8480b8c36dc8f2a2617e1dd44a4e"} Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.607666 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd0bc286-f8c6-436b-926a-0aedf7504098-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.607809 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd0bc286-f8c6-436b-926a-0aedf7504098-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.607824 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czsfr\" (UniqueName: \"kubernetes.io/projected/dd0bc286-f8c6-436b-926a-0aedf7504098-kube-api-access-czsfr\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.627841 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.627825415 podStartE2EDuration="2.627825415s" podCreationTimestamp="2026-02-19 13:08:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:08:11.627404924 +0000 UTC m=+1302.022923692" watchObservedRunningTime="2026-02-19 13:08:11.627825415 +0000 UTC m=+1302.023344183" Feb 19 
13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.643063 4833 scope.go:117] "RemoveContainer" containerID="2b50f92165330ebefdddbb58d950a85b98024973b7b74025113b1db6a925387f" Feb 19 13:08:11 crc kubenswrapper[4833]: E0219 13:08:11.643553 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b50f92165330ebefdddbb58d950a85b98024973b7b74025113b1db6a925387f\": container with ID starting with 2b50f92165330ebefdddbb58d950a85b98024973b7b74025113b1db6a925387f not found: ID does not exist" containerID="2b50f92165330ebefdddbb58d950a85b98024973b7b74025113b1db6a925387f" Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.643587 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b50f92165330ebefdddbb58d950a85b98024973b7b74025113b1db6a925387f"} err="failed to get container status \"2b50f92165330ebefdddbb58d950a85b98024973b7b74025113b1db6a925387f\": rpc error: code = NotFound desc = could not find container \"2b50f92165330ebefdddbb58d950a85b98024973b7b74025113b1db6a925387f\": container with ID starting with 2b50f92165330ebefdddbb58d950a85b98024973b7b74025113b1db6a925387f not found: ID does not exist" Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.648654 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.685245 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.697529 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 13:08:11 crc kubenswrapper[4833]: E0219 13:08:11.698032 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd0bc286-f8c6-436b-926a-0aedf7504098" containerName="nova-scheduler-scheduler" Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.698053 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd0bc286-f8c6-436b-926a-0aedf7504098" containerName="nova-scheduler-scheduler" Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.699038 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd0bc286-f8c6-436b-926a-0aedf7504098" containerName="nova-scheduler-scheduler" Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.700054 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.704630 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.705069 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.814843 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnxpv\" (UniqueName: \"kubernetes.io/projected/b99a7fca-b744-4c37-abf6-76f23e90f7da-kube-api-access-dnxpv\") pod \"nova-scheduler-0\" (UID: \"b99a7fca-b744-4c37-abf6-76f23e90f7da\") " pod="openstack/nova-scheduler-0" Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.814925 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b99a7fca-b744-4c37-abf6-76f23e90f7da-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b99a7fca-b744-4c37-abf6-76f23e90f7da\") " pod="openstack/nova-scheduler-0" Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.815010 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b99a7fca-b744-4c37-abf6-76f23e90f7da-config-data\") pod \"nova-scheduler-0\" (UID: \"b99a7fca-b744-4c37-abf6-76f23e90f7da\") " pod="openstack/nova-scheduler-0" Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.916725 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnxpv\" (UniqueName: \"kubernetes.io/projected/b99a7fca-b744-4c37-abf6-76f23e90f7da-kube-api-access-dnxpv\") pod \"nova-scheduler-0\" (UID: \"b99a7fca-b744-4c37-abf6-76f23e90f7da\") " pod="openstack/nova-scheduler-0" Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.917221 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b99a7fca-b744-4c37-abf6-76f23e90f7da-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b99a7fca-b744-4c37-abf6-76f23e90f7da\") " pod="openstack/nova-scheduler-0" Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.917577 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b99a7fca-b744-4c37-abf6-76f23e90f7da-config-data\") pod \"nova-scheduler-0\" (UID: \"b99a7fca-b744-4c37-abf6-76f23e90f7da\") " pod="openstack/nova-scheduler-0" Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.922821 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b99a7fca-b744-4c37-abf6-76f23e90f7da-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b99a7fca-b744-4c37-abf6-76f23e90f7da\") " pod="openstack/nova-scheduler-0" Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.923329 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b99a7fca-b744-4c37-abf6-76f23e90f7da-config-data\") pod \"nova-scheduler-0\" (UID: \"b99a7fca-b744-4c37-abf6-76f23e90f7da\") " pod="openstack/nova-scheduler-0" Feb 19 13:08:11 crc kubenswrapper[4833]: I0219 13:08:11.947242 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnxpv\" (UniqueName: 
\"kubernetes.io/projected/b99a7fca-b744-4c37-abf6-76f23e90f7da-kube-api-access-dnxpv\") pod \"nova-scheduler-0\" (UID: \"b99a7fca-b744-4c37-abf6-76f23e90f7da\") " pod="openstack/nova-scheduler-0" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.020991 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.326673 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd0bc286-f8c6-436b-926a-0aedf7504098" path="/var/lib/kubelet/pods/dd0bc286-f8c6-436b-926a-0aedf7504098/volumes" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.483127 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.527641 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/765c6d2d-c493-4ade-80cc-27121aad8038-internal-tls-certs\") pod \"765c6d2d-c493-4ade-80cc-27121aad8038\" (UID: \"765c6d2d-c493-4ade-80cc-27121aad8038\") " Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.527679 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/765c6d2d-c493-4ade-80cc-27121aad8038-config-data\") pod \"765c6d2d-c493-4ade-80cc-27121aad8038\" (UID: \"765c6d2d-c493-4ade-80cc-27121aad8038\") " Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.527802 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/765c6d2d-c493-4ade-80cc-27121aad8038-logs\") pod \"765c6d2d-c493-4ade-80cc-27121aad8038\" (UID: \"765c6d2d-c493-4ade-80cc-27121aad8038\") " Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.527842 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/765c6d2d-c493-4ade-80cc-27121aad8038-public-tls-certs\") pod \"765c6d2d-c493-4ade-80cc-27121aad8038\" (UID: \"765c6d2d-c493-4ade-80cc-27121aad8038\") " Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.527862 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bmmg\" (UniqueName: \"kubernetes.io/projected/765c6d2d-c493-4ade-80cc-27121aad8038-kube-api-access-8bmmg\") pod \"765c6d2d-c493-4ade-80cc-27121aad8038\" (UID: \"765c6d2d-c493-4ade-80cc-27121aad8038\") " Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.527911 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/765c6d2d-c493-4ade-80cc-27121aad8038-combined-ca-bundle\") pod \"765c6d2d-c493-4ade-80cc-27121aad8038\" (UID: \"765c6d2d-c493-4ade-80cc-27121aad8038\") " Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.529010 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/765c6d2d-c493-4ade-80cc-27121aad8038-logs" (OuterVolumeSpecName: "logs") pod "765c6d2d-c493-4ade-80cc-27121aad8038" (UID: "765c6d2d-c493-4ade-80cc-27121aad8038"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.534635 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/765c6d2d-c493-4ade-80cc-27121aad8038-kube-api-access-8bmmg" (OuterVolumeSpecName: "kube-api-access-8bmmg") pod "765c6d2d-c493-4ade-80cc-27121aad8038" (UID: "765c6d2d-c493-4ade-80cc-27121aad8038"). InnerVolumeSpecName "kube-api-access-8bmmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.561040 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/765c6d2d-c493-4ade-80cc-27121aad8038-config-data" (OuterVolumeSpecName: "config-data") pod "765c6d2d-c493-4ade-80cc-27121aad8038" (UID: "765c6d2d-c493-4ade-80cc-27121aad8038"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.584781 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/765c6d2d-c493-4ade-80cc-27121aad8038-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "765c6d2d-c493-4ade-80cc-27121aad8038" (UID: "765c6d2d-c493-4ade-80cc-27121aad8038"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.593133 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/765c6d2d-c493-4ade-80cc-27121aad8038-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "765c6d2d-c493-4ade-80cc-27121aad8038" (UID: "765c6d2d-c493-4ade-80cc-27121aad8038"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.603887 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/765c6d2d-c493-4ade-80cc-27121aad8038-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "765c6d2d-c493-4ade-80cc-27121aad8038" (UID: "765c6d2d-c493-4ade-80cc-27121aad8038"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.624649 4833 generic.go:334] "Generic (PLEG): container finished" podID="765c6d2d-c493-4ade-80cc-27121aad8038" containerID="714b36568cf51e211c24f9144210d718b15a249eb58de8f4406ac4ddef539a34" exitCode=0 Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.624721 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.624713 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"765c6d2d-c493-4ade-80cc-27121aad8038","Type":"ContainerDied","Data":"714b36568cf51e211c24f9144210d718b15a249eb58de8f4406ac4ddef539a34"} Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.624863 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"765c6d2d-c493-4ade-80cc-27121aad8038","Type":"ContainerDied","Data":"6953ac2001ea81127f01c57dcbcccef389d46cdf84bb2f6f05bbe3f5774d19d6"} Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.624894 4833 scope.go:117] "RemoveContainer" containerID="714b36568cf51e211c24f9144210d718b15a249eb58de8f4406ac4ddef539a34" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.627337 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.629949 4833 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/765c6d2d-c493-4ade-80cc-27121aad8038-logs\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.629972 4833 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/765c6d2d-c493-4ade-80cc-27121aad8038-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.629984 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bmmg\" (UniqueName: \"kubernetes.io/projected/765c6d2d-c493-4ade-80cc-27121aad8038-kube-api-access-8bmmg\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.629994 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/765c6d2d-c493-4ade-80cc-27121aad8038-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.630002 4833 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/765c6d2d-c493-4ade-80cc-27121aad8038-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.630010 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/765c6d2d-c493-4ade-80cc-27121aad8038-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:12 crc kubenswrapper[4833]: W0219 13:08:12.635645 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb99a7fca_b744_4c37_abf6_76f23e90f7da.slice/crio-fd67886bb498b5e865663e0dacc39b23bf2675bf4cfe8bc25eed064da8d7b375 WatchSource:0}: Error finding container fd67886bb498b5e865663e0dacc39b23bf2675bf4cfe8bc25eed064da8d7b375: Status 404 returned error can't find the container with id fd67886bb498b5e865663e0dacc39b23bf2675bf4cfe8bc25eed064da8d7b375 Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.663934 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.673642 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.683041 4833 scope.go:117] "RemoveContainer" 
containerID="45e58bbac70e2b1540d78e43f50321f21fc2deebc33dbe35ba6aba120ed23667" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.693272 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 19 13:08:12 crc kubenswrapper[4833]: E0219 13:08:12.693735 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="765c6d2d-c493-4ade-80cc-27121aad8038" containerName="nova-api-log" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.693757 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="765c6d2d-c493-4ade-80cc-27121aad8038" containerName="nova-api-log" Feb 19 13:08:12 crc kubenswrapper[4833]: E0219 13:08:12.693801 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="765c6d2d-c493-4ade-80cc-27121aad8038" containerName="nova-api-api" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.693810 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="765c6d2d-c493-4ade-80cc-27121aad8038" containerName="nova-api-api" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.694023 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="765c6d2d-c493-4ade-80cc-27121aad8038" containerName="nova-api-api" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.694048 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="765c6d2d-c493-4ade-80cc-27121aad8038" containerName="nova-api-log" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.695248 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.697475 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.699269 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.700090 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.701275 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.729401 4833 scope.go:117] "RemoveContainer" containerID="714b36568cf51e211c24f9144210d718b15a249eb58de8f4406ac4ddef539a34" Feb 19 13:08:12 crc kubenswrapper[4833]: E0219 13:08:12.730164 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"714b36568cf51e211c24f9144210d718b15a249eb58de8f4406ac4ddef539a34\": container with ID starting with 714b36568cf51e211c24f9144210d718b15a249eb58de8f4406ac4ddef539a34 not found: ID does not exist" containerID="714b36568cf51e211c24f9144210d718b15a249eb58de8f4406ac4ddef539a34" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.730200 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"714b36568cf51e211c24f9144210d718b15a249eb58de8f4406ac4ddef539a34"} err="failed to get container status \"714b36568cf51e211c24f9144210d718b15a249eb58de8f4406ac4ddef539a34\": rpc error: code = NotFound desc = could not find container \"714b36568cf51e211c24f9144210d718b15a249eb58de8f4406ac4ddef539a34\": container with ID starting with 714b36568cf51e211c24f9144210d718b15a249eb58de8f4406ac4ddef539a34 not found: ID does not exist" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.730222 4833 scope.go:117] "RemoveContainer" 
containerID="45e58bbac70e2b1540d78e43f50321f21fc2deebc33dbe35ba6aba120ed23667" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.731439 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d6hl\" (UniqueName: \"kubernetes.io/projected/70e644c8-55f1-4d68-8cfc-f4a12ed42ec2-kube-api-access-9d6hl\") pod \"nova-api-0\" (UID: \"70e644c8-55f1-4d68-8cfc-f4a12ed42ec2\") " pod="openstack/nova-api-0" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.731608 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70e644c8-55f1-4d68-8cfc-f4a12ed42ec2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"70e644c8-55f1-4d68-8cfc-f4a12ed42ec2\") " pod="openstack/nova-api-0" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.731656 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70e644c8-55f1-4d68-8cfc-f4a12ed42ec2-public-tls-certs\") pod \"nova-api-0\" (UID: \"70e644c8-55f1-4d68-8cfc-f4a12ed42ec2\") " pod="openstack/nova-api-0" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.731680 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70e644c8-55f1-4d68-8cfc-f4a12ed42ec2-logs\") pod \"nova-api-0\" (UID: \"70e644c8-55f1-4d68-8cfc-f4a12ed42ec2\") " pod="openstack/nova-api-0" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.731716 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70e644c8-55f1-4d68-8cfc-f4a12ed42ec2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"70e644c8-55f1-4d68-8cfc-f4a12ed42ec2\") " pod="openstack/nova-api-0" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.731748 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70e644c8-55f1-4d68-8cfc-f4a12ed42ec2-config-data\") pod \"nova-api-0\" (UID: \"70e644c8-55f1-4d68-8cfc-f4a12ed42ec2\") " pod="openstack/nova-api-0" Feb 19 13:08:12 crc kubenswrapper[4833]: E0219 13:08:12.732875 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45e58bbac70e2b1540d78e43f50321f21fc2deebc33dbe35ba6aba120ed23667\": container with ID starting with 45e58bbac70e2b1540d78e43f50321f21fc2deebc33dbe35ba6aba120ed23667 not found: ID does not exist" containerID="45e58bbac70e2b1540d78e43f50321f21fc2deebc33dbe35ba6aba120ed23667" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.733005 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45e58bbac70e2b1540d78e43f50321f21fc2deebc33dbe35ba6aba120ed23667"} err="failed to get container status \"45e58bbac70e2b1540d78e43f50321f21fc2deebc33dbe35ba6aba120ed23667\": rpc error: code = NotFound desc = could not find container \"45e58bbac70e2b1540d78e43f50321f21fc2deebc33dbe35ba6aba120ed23667\": container with ID starting with 45e58bbac70e2b1540d78e43f50321f21fc2deebc33dbe35ba6aba120ed23667 not found: ID does not exist" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.833989 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/70e644c8-55f1-4d68-8cfc-f4a12ed42ec2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"70e644c8-55f1-4d68-8cfc-f4a12ed42ec2\") " pod="openstack/nova-api-0" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.834044 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70e644c8-55f1-4d68-8cfc-f4a12ed42ec2-public-tls-certs\") pod \"nova-api-0\" (UID: \"70e644c8-55f1-4d68-8cfc-f4a12ed42ec2\") " pod="openstack/nova-api-0" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.834071 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70e644c8-55f1-4d68-8cfc-f4a12ed42ec2-logs\") pod \"nova-api-0\" (UID: \"70e644c8-55f1-4d68-8cfc-f4a12ed42ec2\") " pod="openstack/nova-api-0" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.834103 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70e644c8-55f1-4d68-8cfc-f4a12ed42ec2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"70e644c8-55f1-4d68-8cfc-f4a12ed42ec2\") " pod="openstack/nova-api-0" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.834127 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70e644c8-55f1-4d68-8cfc-f4a12ed42ec2-config-data\") pod \"nova-api-0\" (UID: \"70e644c8-55f1-4d68-8cfc-f4a12ed42ec2\") " pod="openstack/nova-api-0" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.834224 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d6hl\" (UniqueName: \"kubernetes.io/projected/70e644c8-55f1-4d68-8cfc-f4a12ed42ec2-kube-api-access-9d6hl\") pod \"nova-api-0\" (UID: \"70e644c8-55f1-4d68-8cfc-f4a12ed42ec2\") " pod="openstack/nova-api-0" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.834997 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70e644c8-55f1-4d68-8cfc-f4a12ed42ec2-logs\") pod \"nova-api-0\" (UID: \"70e644c8-55f1-4d68-8cfc-f4a12ed42ec2\") " pod="openstack/nova-api-0" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.838897 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70e644c8-55f1-4d68-8cfc-f4a12ed42ec2-public-tls-certs\") pod \"nova-api-0\" (UID: \"70e644c8-55f1-4d68-8cfc-f4a12ed42ec2\") " pod="openstack/nova-api-0" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.839050 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70e644c8-55f1-4d68-8cfc-f4a12ed42ec2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"70e644c8-55f1-4d68-8cfc-f4a12ed42ec2\") " pod="openstack/nova-api-0" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.839478 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70e644c8-55f1-4d68-8cfc-f4a12ed42ec2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"70e644c8-55f1-4d68-8cfc-f4a12ed42ec2\") " pod="openstack/nova-api-0" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.839592 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70e644c8-55f1-4d68-8cfc-f4a12ed42ec2-config-data\") pod 
\"nova-api-0\" (UID: \"70e644c8-55f1-4d68-8cfc-f4a12ed42ec2\") " pod="openstack/nova-api-0" Feb 19 13:08:12 crc kubenswrapper[4833]: I0219 13:08:12.854262 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d6hl\" (UniqueName: \"kubernetes.io/projected/70e644c8-55f1-4d68-8cfc-f4a12ed42ec2-kube-api-access-9d6hl\") pod \"nova-api-0\" (UID: \"70e644c8-55f1-4d68-8cfc-f4a12ed42ec2\") " pod="openstack/nova-api-0" Feb 19 13:08:13 crc kubenswrapper[4833]: I0219 13:08:13.028948 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 19 13:08:13 crc kubenswrapper[4833]: I0219 13:08:13.606289 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 19 13:08:13 crc kubenswrapper[4833]: I0219 13:08:13.641910 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"70e644c8-55f1-4d68-8cfc-f4a12ed42ec2","Type":"ContainerStarted","Data":"250b11671e3040a51cbc9ffde70c0f1d092b5fd750dfedb22a61a84aed57a29d"} Feb 19 13:08:13 crc kubenswrapper[4833]: I0219 13:08:13.644135 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b99a7fca-b744-4c37-abf6-76f23e90f7da","Type":"ContainerStarted","Data":"308948bb5259c7507573ca4f7f3b52af4cbd4e0249d096b7c612081c3573f4aa"} Feb 19 13:08:13 crc kubenswrapper[4833]: I0219 13:08:13.644179 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b99a7fca-b744-4c37-abf6-76f23e90f7da","Type":"ContainerStarted","Data":"fd67886bb498b5e865663e0dacc39b23bf2675bf4cfe8bc25eed064da8d7b375"} Feb 19 13:08:13 crc kubenswrapper[4833]: I0219 13:08:13.668353 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.668325162 podStartE2EDuration="2.668325162s" podCreationTimestamp="2026-02-19 13:08:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:08:13.666851433 +0000 UTC m=+1304.062370231" watchObservedRunningTime="2026-02-19 13:08:13.668325162 +0000 UTC m=+1304.063843930" Feb 19 13:08:14 crc kubenswrapper[4833]: I0219 13:08:14.334209 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="765c6d2d-c493-4ade-80cc-27121aad8038" path="/var/lib/kubelet/pods/765c6d2d-c493-4ade-80cc-27121aad8038/volumes" Feb 19 13:08:14 crc kubenswrapper[4833]: I0219 13:08:14.671677 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"70e644c8-55f1-4d68-8cfc-f4a12ed42ec2","Type":"ContainerStarted","Data":"4195511b726b68789af8463da81fc89f9e05e8615c48815b5bb29f07e2bca369"} Feb 19 13:08:14 crc kubenswrapper[4833]: I0219 13:08:14.671752 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"70e644c8-55f1-4d68-8cfc-f4a12ed42ec2","Type":"ContainerStarted","Data":"71d4402d6da84391ad9d67937741e012d6917f6c0cb0992ebe138b429fe42e6a"} Feb 19 13:08:14 crc kubenswrapper[4833]: I0219 13:08:14.712116 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.712095158 podStartE2EDuration="2.712095158s" podCreationTimestamp="2026-02-19 13:08:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:08:14.70130404 +0000 UTC m=+1305.096822828" 
watchObservedRunningTime="2026-02-19 13:08:14.712095158 +0000 UTC m=+1305.107613936" Feb 19 13:08:15 crc kubenswrapper[4833]: I0219 13:08:15.332543 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 13:08:15 crc kubenswrapper[4833]: I0219 13:08:15.332654 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 19 13:08:17 crc kubenswrapper[4833]: I0219 13:08:17.021196 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 19 13:08:20 crc kubenswrapper[4833]: I0219 13:08:20.333276 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 13:08:20 crc kubenswrapper[4833]: I0219 13:08:20.334442 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 19 13:08:21 crc kubenswrapper[4833]: I0219 13:08:21.341730 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="59a4389e-efff-4621-bc9d-548f8c2b78f9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 13:08:21 crc kubenswrapper[4833]: I0219 13:08:21.341735 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="59a4389e-efff-4621-bc9d-548f8c2b78f9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 13:08:22 crc kubenswrapper[4833]: I0219 13:08:22.021637 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 19 13:08:22 crc kubenswrapper[4833]: I0219 13:08:22.049174 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 19 13:08:22 crc kubenswrapper[4833]: I0219 13:08:22.797207 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 19 13:08:23 crc kubenswrapper[4833]: I0219 13:08:23.029354 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 13:08:23 crc kubenswrapper[4833]: I0219 13:08:23.029440 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 19 13:08:24 crc kubenswrapper[4833]: I0219 13:08:24.039631 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="70e644c8-55f1-4d68-8cfc-f4a12ed42ec2" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 13:08:24 crc kubenswrapper[4833]: I0219 13:08:24.039631 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="70e644c8-55f1-4d68-8cfc-f4a12ed42ec2" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 19 13:08:28 crc kubenswrapper[4833]: I0219 13:08:28.817872 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 19 13:08:30 crc kubenswrapper[4833]: I0219 13:08:30.337816 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
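
The pod_startup_latency_tracker entries above are internally consistent: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp, with image-pull time excluded (both pull timestamps are the zero value 0001-01-01, so nothing is subtracted here). A worked check for nova-api-0:

```python
# podStartSLOduration=2.712095158 for nova-api-0: watchObservedRunningTime
# 13:08:14.712095158 minus creation 13:08:12 (truncated to microseconds
# for datetime).
from datetime import datetime, timezone

created  = datetime(2026, 2, 19, 13, 8, 12, tzinfo=timezone.utc)
observed = datetime(2026, 2, 19, 13, 8, 14, 712095, tzinfo=timezone.utc)
print((observed - created).total_seconds())  # 2.712095
```

The nova-scheduler-0 and nova-metadata-0 entries check out the same way (2.668325162 s and 2.627825415 s).
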
pod="openstack/nova-metadata-0" Feb 19 13:08:30 crc kubenswrapper[4833]: I0219 13:08:30.343440 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 13:08:30 crc kubenswrapper[4833]: I0219 13:08:30.347733 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 19 13:08:30 crc kubenswrapper[4833]: I0219 13:08:30.861752 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 19 13:08:33 crc kubenswrapper[4833]: I0219 13:08:33.038487 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 13:08:33 crc kubenswrapper[4833]: I0219 13:08:33.039104 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 13:08:33 crc kubenswrapper[4833]: I0219 13:08:33.040271 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 19 13:08:33 crc kubenswrapper[4833]: I0219 13:08:33.049889 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 13:08:33 crc kubenswrapper[4833]: I0219 13:08:33.886073 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 19 13:08:33 crc kubenswrapper[4833]: I0219 13:08:33.893923 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 19 13:08:42 crc kubenswrapper[4833]: I0219 13:08:42.204169 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 13:08:43 crc kubenswrapper[4833]: I0219 13:08:43.194714 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 13:08:46 crc kubenswrapper[4833]: I0219 13:08:46.481616 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="9c07579b-ab54-4267-83d6-1d6c0404ba3e" containerName="rabbitmq" containerID="cri-o://24555c0362540584930eb2b0f67b08f0230f630668eeb48fbcb6a570f233da57" gracePeriod=604796 Feb 19 13:08:47 crc kubenswrapper[4833]: I0219 13:08:47.220378 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="a356e13b-39de-4d0b-aa58-f2dc6d3179fb" containerName="rabbitmq" containerID="cri-o://b227aa18147c3318f4439519614f4e559588bef3b2edb3e58892bc18369d9abe" gracePeriod=604796 Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.114584 4833 generic.go:334] "Generic (PLEG): container finished" podID="9c07579b-ab54-4267-83d6-1d6c0404ba3e" containerID="24555c0362540584930eb2b0f67b08f0230f630668eeb48fbcb6a570f233da57" exitCode=0 Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.114630 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9c07579b-ab54-4267-83d6-1d6c0404ba3e","Type":"ContainerDied","Data":"24555c0362540584930eb2b0f67b08f0230f630668eeb48fbcb6a570f233da57"} Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.115125 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9c07579b-ab54-4267-83d6-1d6c0404ba3e","Type":"ContainerDied","Data":"32acf339673f51b3a8356c341f3e232bd60b365d32763c70a5cad033d302ba5c"} Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.115139 4833 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="32acf339673f51b3a8356c341f3e232bd60b365d32763c70a5cad033d302ba5c" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.136520 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.160475 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smgt2\" (UniqueName: \"kubernetes.io/projected/9c07579b-ab54-4267-83d6-1d6c0404ba3e-kube-api-access-smgt2\") pod \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.160543 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c07579b-ab54-4267-83d6-1d6c0404ba3e-config-data\") pod \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.160601 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9c07579b-ab54-4267-83d6-1d6c0404ba3e-erlang-cookie-secret\") pod \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.160666 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9c07579b-ab54-4267-83d6-1d6c0404ba3e-rabbitmq-erlang-cookie\") pod \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.160725 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9c07579b-ab54-4267-83d6-1d6c0404ba3e-server-conf\") pod \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.160757 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9c07579b-ab54-4267-83d6-1d6c0404ba3e-rabbitmq-confd\") pod \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.160799 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9c07579b-ab54-4267-83d6-1d6c0404ba3e-pod-info\") pod \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.160847 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9c07579b-ab54-4267-83d6-1d6c0404ba3e-rabbitmq-tls\") pod \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.160877 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9c07579b-ab54-4267-83d6-1d6c0404ba3e-plugins-conf\") pod \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.160914 4833 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9c07579b-ab54-4267-83d6-1d6c0404ba3e-rabbitmq-plugins\") pod \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.160937 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\" (UID: \"9c07579b-ab54-4267-83d6-1d6c0404ba3e\") " Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.161765 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c07579b-ab54-4267-83d6-1d6c0404ba3e-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9c07579b-ab54-4267-83d6-1d6c0404ba3e" (UID: "9c07579b-ab54-4267-83d6-1d6c0404ba3e"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.162015 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c07579b-ab54-4267-83d6-1d6c0404ba3e-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9c07579b-ab54-4267-83d6-1d6c0404ba3e" (UID: "9c07579b-ab54-4267-83d6-1d6c0404ba3e"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.162046 4833 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9c07579b-ab54-4267-83d6-1d6c0404ba3e-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.162941 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c07579b-ab54-4267-83d6-1d6c0404ba3e-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9c07579b-ab54-4267-83d6-1d6c0404ba3e" (UID: "9c07579b-ab54-4267-83d6-1d6c0404ba3e"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.170205 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c07579b-ab54-4267-83d6-1d6c0404ba3e-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9c07579b-ab54-4267-83d6-1d6c0404ba3e" (UID: "9c07579b-ab54-4267-83d6-1d6c0404ba3e"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.181660 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c07579b-ab54-4267-83d6-1d6c0404ba3e-kube-api-access-smgt2" (OuterVolumeSpecName: "kube-api-access-smgt2") pod "9c07579b-ab54-4267-83d6-1d6c0404ba3e" (UID: "9c07579b-ab54-4267-83d6-1d6c0404ba3e"). InnerVolumeSpecName "kube-api-access-smgt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.187042 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c07579b-ab54-4267-83d6-1d6c0404ba3e-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "9c07579b-ab54-4267-83d6-1d6c0404ba3e" (UID: "9c07579b-ab54-4267-83d6-1d6c0404ba3e"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.195080 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9c07579b-ab54-4267-83d6-1d6c0404ba3e-pod-info" (OuterVolumeSpecName: "pod-info") pod "9c07579b-ab54-4267-83d6-1d6c0404ba3e" (UID: "9c07579b-ab54-4267-83d6-1d6c0404ba3e"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.196624 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "9c07579b-ab54-4267-83d6-1d6c0404ba3e" (UID: "9c07579b-ab54-4267-83d6-1d6c0404ba3e"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.250141 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c07579b-ab54-4267-83d6-1d6c0404ba3e-config-data" (OuterVolumeSpecName: "config-data") pod "9c07579b-ab54-4267-83d6-1d6c0404ba3e" (UID: "9c07579b-ab54-4267-83d6-1d6c0404ba3e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.265504 4833 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9c07579b-ab54-4267-83d6-1d6c0404ba3e-pod-info\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.265742 4833 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9c07579b-ab54-4267-83d6-1d6c0404ba3e-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.265809 4833 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9c07579b-ab54-4267-83d6-1d6c0404ba3e-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.265896 4833 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9c07579b-ab54-4267-83d6-1d6c0404ba3e-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.265985 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.266054 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smgt2\" (UniqueName: \"kubernetes.io/projected/9c07579b-ab54-4267-83d6-1d6c0404ba3e-kube-api-access-smgt2\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.266117 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c07579b-ab54-4267-83d6-1d6c0404ba3e-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.266252 4833 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9c07579b-ab54-4267-83d6-1d6c0404ba3e-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.306575 4833 
operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.328963 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c07579b-ab54-4267-83d6-1d6c0404ba3e-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9c07579b-ab54-4267-83d6-1d6c0404ba3e" (UID: "9c07579b-ab54-4267-83d6-1d6c0404ba3e"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.339873 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c07579b-ab54-4267-83d6-1d6c0404ba3e-server-conf" (OuterVolumeSpecName: "server-conf") pod "9c07579b-ab54-4267-83d6-1d6c0404ba3e" (UID: "9c07579b-ab54-4267-83d6-1d6c0404ba3e"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.370624 4833 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9c07579b-ab54-4267-83d6-1d6c0404ba3e-server-conf\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.370660 4833 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9c07579b-ab54-4267-83d6-1d6c0404ba3e-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.370674 4833 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.815844 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.878672 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-rabbitmq-confd\") pod \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.878744 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-rabbitmq-erlang-cookie\") pod \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.878838 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-config-data\") pod \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.878945 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-rabbitmq-tls\") pod \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.879013 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-erlang-cookie-secret\") pod \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.879093 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-rabbitmq-plugins\") pod \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.879203 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gql7f\" (UniqueName: \"kubernetes.io/projected/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-kube-api-access-gql7f\") pod \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.880166 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-pod-info\") pod \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.880277 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-plugins-conf\") pod \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.880326 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-server-conf\") pod 
\"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.880388 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\" (UID: \"a356e13b-39de-4d0b-aa58-f2dc6d3179fb\") " Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.885259 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "a356e13b-39de-4d0b-aa58-f2dc6d3179fb" (UID: "a356e13b-39de-4d0b-aa58-f2dc6d3179fb"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.888774 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "a356e13b-39de-4d0b-aa58-f2dc6d3179fb" (UID: "a356e13b-39de-4d0b-aa58-f2dc6d3179fb"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.889164 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "a356e13b-39de-4d0b-aa58-f2dc6d3179fb" (UID: "a356e13b-39de-4d0b-aa58-f2dc6d3179fb"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.894665 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "a356e13b-39de-4d0b-aa58-f2dc6d3179fb" (UID: "a356e13b-39de-4d0b-aa58-f2dc6d3179fb"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.895178 4833 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.895218 4833 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.895235 4833 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.895247 4833 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.896685 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "a356e13b-39de-4d0b-aa58-f2dc6d3179fb" (UID: "a356e13b-39de-4d0b-aa58-f2dc6d3179fb"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.896729 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-kube-api-access-gql7f" (OuterVolumeSpecName: "kube-api-access-gql7f") pod "a356e13b-39de-4d0b-aa58-f2dc6d3179fb" (UID: "a356e13b-39de-4d0b-aa58-f2dc6d3179fb"). InnerVolumeSpecName "kube-api-access-gql7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.902202 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-pod-info" (OuterVolumeSpecName: "pod-info") pod "a356e13b-39de-4d0b-aa58-f2dc6d3179fb" (UID: "a356e13b-39de-4d0b-aa58-f2dc6d3179fb"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.914309 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "a356e13b-39de-4d0b-aa58-f2dc6d3179fb" (UID: "a356e13b-39de-4d0b-aa58-f2dc6d3179fb"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.925254 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-config-data" (OuterVolumeSpecName: "config-data") pod "a356e13b-39de-4d0b-aa58-f2dc6d3179fb" (UID: "a356e13b-39de-4d0b-aa58-f2dc6d3179fb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.970070 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-server-conf" (OuterVolumeSpecName: "server-conf") pod "a356e13b-39de-4d0b-aa58-f2dc6d3179fb" (UID: "a356e13b-39de-4d0b-aa58-f2dc6d3179fb"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.996635 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gql7f\" (UniqueName: \"kubernetes.io/projected/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-kube-api-access-gql7f\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.996677 4833 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-pod-info\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.996692 4833 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-server-conf\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.996727 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.996739 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:53 crc kubenswrapper[4833]: I0219 13:08:53.996751 4833 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.011713 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "a356e13b-39de-4d0b-aa58-f2dc6d3179fb" (UID: "a356e13b-39de-4d0b-aa58-f2dc6d3179fb"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.033986 4833 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.097933 4833 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.097969 4833 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a356e13b-39de-4d0b-aa58-f2dc6d3179fb-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.127070 4833 generic.go:334] "Generic (PLEG): container finished" podID="a356e13b-39de-4d0b-aa58-f2dc6d3179fb" containerID="b227aa18147c3318f4439519614f4e559588bef3b2edb3e58892bc18369d9abe" exitCode=0 Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.127163 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.128406 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a356e13b-39de-4d0b-aa58-f2dc6d3179fb","Type":"ContainerDied","Data":"b227aa18147c3318f4439519614f4e559588bef3b2edb3e58892bc18369d9abe"} Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.128465 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a356e13b-39de-4d0b-aa58-f2dc6d3179fb","Type":"ContainerDied","Data":"0d9f54f3a20af8540dc70cec52456b2b60de31fa259ce62e928771149500f80a"} Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.128483 4833 scope.go:117] "RemoveContainer" containerID="b227aa18147c3318f4439519614f4e559588bef3b2edb3e58892bc18369d9abe" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.128648 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.173038 4833 scope.go:117] "RemoveContainer" containerID="1f4138a564ad2060601baf14a1521b564d28aa8298dabde6f911b2c8db00a56f" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.182255 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.203726 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.217228 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.231357 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.231966 4833 scope.go:117] "RemoveContainer" containerID="b227aa18147c3318f4439519614f4e559588bef3b2edb3e58892bc18369d9abe" Feb 19 13:08:54 crc kubenswrapper[4833]: E0219 13:08:54.233103 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b227aa18147c3318f4439519614f4e559588bef3b2edb3e58892bc18369d9abe\": container with ID starting with b227aa18147c3318f4439519614f4e559588bef3b2edb3e58892bc18369d9abe not found: ID does not exist" containerID="b227aa18147c3318f4439519614f4e559588bef3b2edb3e58892bc18369d9abe" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.233135 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b227aa18147c3318f4439519614f4e559588bef3b2edb3e58892bc18369d9abe"} err="failed to get container status \"b227aa18147c3318f4439519614f4e559588bef3b2edb3e58892bc18369d9abe\": rpc error: code = NotFound desc = could not find container \"b227aa18147c3318f4439519614f4e559588bef3b2edb3e58892bc18369d9abe\": container with ID starting with b227aa18147c3318f4439519614f4e559588bef3b2edb3e58892bc18369d9abe not found: ID does not exist" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.233162 4833 scope.go:117] "RemoveContainer" containerID="1f4138a564ad2060601baf14a1521b564d28aa8298dabde6f911b2c8db00a56f" Feb 19 13:08:54 crc kubenswrapper[4833]: E0219 13:08:54.234161 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f4138a564ad2060601baf14a1521b564d28aa8298dabde6f911b2c8db00a56f\": container with ID starting with 1f4138a564ad2060601baf14a1521b564d28aa8298dabde6f911b2c8db00a56f not found: ID does not exist" containerID="1f4138a564ad2060601baf14a1521b564d28aa8298dabde6f911b2c8db00a56f" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.234192 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f4138a564ad2060601baf14a1521b564d28aa8298dabde6f911b2c8db00a56f"} err="failed to get container status \"1f4138a564ad2060601baf14a1521b564d28aa8298dabde6f911b2c8db00a56f\": rpc error: code = NotFound desc = could not find container \"1f4138a564ad2060601baf14a1521b564d28aa8298dabde6f911b2c8db00a56f\": container with ID starting with 1f4138a564ad2060601baf14a1521b564d28aa8298dabde6f911b2c8db00a56f not found: ID does not exist" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.247485 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 13:08:54 crc kubenswrapper[4833]: E0219 
13:08:54.247987 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c07579b-ab54-4267-83d6-1d6c0404ba3e" containerName="setup-container" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.248010 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c07579b-ab54-4267-83d6-1d6c0404ba3e" containerName="setup-container" Feb 19 13:08:54 crc kubenswrapper[4833]: E0219 13:08:54.248023 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a356e13b-39de-4d0b-aa58-f2dc6d3179fb" containerName="rabbitmq" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.248032 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a356e13b-39de-4d0b-aa58-f2dc6d3179fb" containerName="rabbitmq" Feb 19 13:08:54 crc kubenswrapper[4833]: E0219 13:08:54.248049 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c07579b-ab54-4267-83d6-1d6c0404ba3e" containerName="rabbitmq" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.248058 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c07579b-ab54-4267-83d6-1d6c0404ba3e" containerName="rabbitmq" Feb 19 13:08:54 crc kubenswrapper[4833]: E0219 13:08:54.248092 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a356e13b-39de-4d0b-aa58-f2dc6d3179fb" containerName="setup-container" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.248101 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="a356e13b-39de-4d0b-aa58-f2dc6d3179fb" containerName="setup-container" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.248329 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c07579b-ab54-4267-83d6-1d6c0404ba3e" containerName="rabbitmq" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.248351 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="a356e13b-39de-4d0b-aa58-f2dc6d3179fb" containerName="rabbitmq" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.249483 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.252109 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.252317 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.252786 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-jr9r6" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.252970 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.253119 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.253386 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.254271 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.265671 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.267345 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.270540 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.270729 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.270867 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.270967 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.271310 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.281830 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.287602 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-8ddnc" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.296716 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.342998 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c07579b-ab54-4267-83d6-1d6c0404ba3e" path="/var/lib/kubelet/pods/9c07579b-ab54-4267-83d6-1d6c0404ba3e/volumes" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.350220 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a356e13b-39de-4d0b-aa58-f2dc6d3179fb" path="/var/lib/kubelet/pods/a356e13b-39de-4d0b-aa58-f2dc6d3179fb/volumes" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.352825 4833 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.407432 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/95192227-96aa-4fa8-a7db-89f31efb056c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"95192227-96aa-4fa8-a7db-89f31efb056c\") " pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.407485 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/52361cb4-eea4-49c7-b06b-acbe0ad24450-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52361cb4-eea4-49c7-b06b-acbe0ad24450\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.407527 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/52361cb4-eea4-49c7-b06b-acbe0ad24450-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"52361cb4-eea4-49c7-b06b-acbe0ad24450\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.407547 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"52361cb4-eea4-49c7-b06b-acbe0ad24450\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.407580 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/95192227-96aa-4fa8-a7db-89f31efb056c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"95192227-96aa-4fa8-a7db-89f31efb056c\") " pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.407596 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/95192227-96aa-4fa8-a7db-89f31efb056c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"95192227-96aa-4fa8-a7db-89f31efb056c\") " pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.407613 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/95192227-96aa-4fa8-a7db-89f31efb056c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"95192227-96aa-4fa8-a7db-89f31efb056c\") " pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.407634 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/95192227-96aa-4fa8-a7db-89f31efb056c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"95192227-96aa-4fa8-a7db-89f31efb056c\") " pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.407653 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmskq\" (UniqueName: \"kubernetes.io/projected/52361cb4-eea4-49c7-b06b-acbe0ad24450-kube-api-access-tmskq\") pod \"rabbitmq-cell1-server-0\" (UID: \"52361cb4-eea4-49c7-b06b-acbe0ad24450\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.407675 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/52361cb4-eea4-49c7-b06b-acbe0ad24450-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"52361cb4-eea4-49c7-b06b-acbe0ad24450\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.407705 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6dv6\" (UniqueName: \"kubernetes.io/projected/95192227-96aa-4fa8-a7db-89f31efb056c-kube-api-access-x6dv6\") pod \"rabbitmq-server-0\" (UID: \"95192227-96aa-4fa8-a7db-89f31efb056c\") " pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.407737 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"95192227-96aa-4fa8-a7db-89f31efb056c\") " pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.407757 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/52361cb4-eea4-49c7-b06b-acbe0ad24450-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52361cb4-eea4-49c7-b06b-acbe0ad24450\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.407775 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/95192227-96aa-4fa8-a7db-89f31efb056c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"95192227-96aa-4fa8-a7db-89f31efb056c\") " pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.407824 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95192227-96aa-4fa8-a7db-89f31efb056c-config-data\") pod \"rabbitmq-server-0\" (UID: \"95192227-96aa-4fa8-a7db-89f31efb056c\") " pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.407841 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/95192227-96aa-4fa8-a7db-89f31efb056c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"95192227-96aa-4fa8-a7db-89f31efb056c\") " pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.407866 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/52361cb4-eea4-49c7-b06b-acbe0ad24450-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"52361cb4-eea4-49c7-b06b-acbe0ad24450\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.407884 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/95192227-96aa-4fa8-a7db-89f31efb056c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"95192227-96aa-4fa8-a7db-89f31efb056c\") " pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc 
kubenswrapper[4833]: I0219 13:08:54.407902 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/52361cb4-eea4-49c7-b06b-acbe0ad24450-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"52361cb4-eea4-49c7-b06b-acbe0ad24450\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.407924 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/52361cb4-eea4-49c7-b06b-acbe0ad24450-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"52361cb4-eea4-49c7-b06b-acbe0ad24450\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.407940 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52361cb4-eea4-49c7-b06b-acbe0ad24450-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"52361cb4-eea4-49c7-b06b-acbe0ad24450\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.407958 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/52361cb4-eea4-49c7-b06b-acbe0ad24450-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"52361cb4-eea4-49c7-b06b-acbe0ad24450\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.509428 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/95192227-96aa-4fa8-a7db-89f31efb056c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"95192227-96aa-4fa8-a7db-89f31efb056c\") " pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.509476 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/95192227-96aa-4fa8-a7db-89f31efb056c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"95192227-96aa-4fa8-a7db-89f31efb056c\") " pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.509508 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/95192227-96aa-4fa8-a7db-89f31efb056c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"95192227-96aa-4fa8-a7db-89f31efb056c\") " pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.509529 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/95192227-96aa-4fa8-a7db-89f31efb056c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"95192227-96aa-4fa8-a7db-89f31efb056c\") " pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.509553 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmskq\" (UniqueName: \"kubernetes.io/projected/52361cb4-eea4-49c7-b06b-acbe0ad24450-kube-api-access-tmskq\") pod \"rabbitmq-cell1-server-0\" (UID: \"52361cb4-eea4-49c7-b06b-acbe0ad24450\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.509578 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/52361cb4-eea4-49c7-b06b-acbe0ad24450-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"52361cb4-eea4-49c7-b06b-acbe0ad24450\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.509607 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6dv6\" (UniqueName: \"kubernetes.io/projected/95192227-96aa-4fa8-a7db-89f31efb056c-kube-api-access-x6dv6\") pod \"rabbitmq-server-0\" (UID: \"95192227-96aa-4fa8-a7db-89f31efb056c\") " pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.509640 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"95192227-96aa-4fa8-a7db-89f31efb056c\") " pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.509663 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/52361cb4-eea4-49c7-b06b-acbe0ad24450-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52361cb4-eea4-49c7-b06b-acbe0ad24450\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.509680 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/95192227-96aa-4fa8-a7db-89f31efb056c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"95192227-96aa-4fa8-a7db-89f31efb056c\") " pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.509702 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95192227-96aa-4fa8-a7db-89f31efb056c-config-data\") pod \"rabbitmq-server-0\" (UID: \"95192227-96aa-4fa8-a7db-89f31efb056c\") " pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.509716 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/95192227-96aa-4fa8-a7db-89f31efb056c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"95192227-96aa-4fa8-a7db-89f31efb056c\") " pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.509737 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/52361cb4-eea4-49c7-b06b-acbe0ad24450-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"52361cb4-eea4-49c7-b06b-acbe0ad24450\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.509754 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/95192227-96aa-4fa8-a7db-89f31efb056c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"95192227-96aa-4fa8-a7db-89f31efb056c\") " pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.509771 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/52361cb4-eea4-49c7-b06b-acbe0ad24450-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"52361cb4-eea4-49c7-b06b-acbe0ad24450\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.509793 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/52361cb4-eea4-49c7-b06b-acbe0ad24450-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"52361cb4-eea4-49c7-b06b-acbe0ad24450\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.509808 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52361cb4-eea4-49c7-b06b-acbe0ad24450-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"52361cb4-eea4-49c7-b06b-acbe0ad24450\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.509825 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/52361cb4-eea4-49c7-b06b-acbe0ad24450-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"52361cb4-eea4-49c7-b06b-acbe0ad24450\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.509876 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/95192227-96aa-4fa8-a7db-89f31efb056c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"95192227-96aa-4fa8-a7db-89f31efb056c\") " pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.509893 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/52361cb4-eea4-49c7-b06b-acbe0ad24450-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52361cb4-eea4-49c7-b06b-acbe0ad24450\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.509919 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/52361cb4-eea4-49c7-b06b-acbe0ad24450-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"52361cb4-eea4-49c7-b06b-acbe0ad24450\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.509936 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"52361cb4-eea4-49c7-b06b-acbe0ad24450\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.510221 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"52361cb4-eea4-49c7-b06b-acbe0ad24450\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.510484 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/95192227-96aa-4fa8-a7db-89f31efb056c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"95192227-96aa-4fa8-a7db-89f31efb056c\") " pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.510861 4833 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/95192227-96aa-4fa8-a7db-89f31efb056c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"95192227-96aa-4fa8-a7db-89f31efb056c\") " pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.511124 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/95192227-96aa-4fa8-a7db-89f31efb056c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"95192227-96aa-4fa8-a7db-89f31efb056c\") " pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.511940 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/52361cb4-eea4-49c7-b06b-acbe0ad24450-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52361cb4-eea4-49c7-b06b-acbe0ad24450\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.512164 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/95192227-96aa-4fa8-a7db-89f31efb056c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"95192227-96aa-4fa8-a7db-89f31efb056c\") " pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.515624 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/52361cb4-eea4-49c7-b06b-acbe0ad24450-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"52361cb4-eea4-49c7-b06b-acbe0ad24450\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.516574 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95192227-96aa-4fa8-a7db-89f31efb056c-config-data\") pod \"rabbitmq-server-0\" (UID: \"95192227-96aa-4fa8-a7db-89f31efb056c\") " pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.516659 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"95192227-96aa-4fa8-a7db-89f31efb056c\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.516994 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/52361cb4-eea4-49c7-b06b-acbe0ad24450-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"52361cb4-eea4-49c7-b06b-acbe0ad24450\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.517544 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52361cb4-eea4-49c7-b06b-acbe0ad24450-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"52361cb4-eea4-49c7-b06b-acbe0ad24450\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.518056 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/52361cb4-eea4-49c7-b06b-acbe0ad24450-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"52361cb4-eea4-49c7-b06b-acbe0ad24450\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.527769 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/95192227-96aa-4fa8-a7db-89f31efb056c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"95192227-96aa-4fa8-a7db-89f31efb056c\") " pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.531326 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/95192227-96aa-4fa8-a7db-89f31efb056c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"95192227-96aa-4fa8-a7db-89f31efb056c\") " pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.533114 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/52361cb4-eea4-49c7-b06b-acbe0ad24450-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"52361cb4-eea4-49c7-b06b-acbe0ad24450\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.533613 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/52361cb4-eea4-49c7-b06b-acbe0ad24450-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"52361cb4-eea4-49c7-b06b-acbe0ad24450\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.536125 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/52361cb4-eea4-49c7-b06b-acbe0ad24450-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"52361cb4-eea4-49c7-b06b-acbe0ad24450\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.545564 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/52361cb4-eea4-49c7-b06b-acbe0ad24450-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"52361cb4-eea4-49c7-b06b-acbe0ad24450\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.545735 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmskq\" (UniqueName: \"kubernetes.io/projected/52361cb4-eea4-49c7-b06b-acbe0ad24450-kube-api-access-tmskq\") pod \"rabbitmq-cell1-server-0\" (UID: \"52361cb4-eea4-49c7-b06b-acbe0ad24450\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.545777 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6dv6\" (UniqueName: \"kubernetes.io/projected/95192227-96aa-4fa8-a7db-89f31efb056c-kube-api-access-x6dv6\") pod \"rabbitmq-server-0\" (UID: \"95192227-96aa-4fa8-a7db-89f31efb056c\") " pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.548971 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/95192227-96aa-4fa8-a7db-89f31efb056c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"95192227-96aa-4fa8-a7db-89f31efb056c\") " pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.549121 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/95192227-96aa-4fa8-a7db-89f31efb056c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"95192227-96aa-4fa8-a7db-89f31efb056c\") " pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.568694 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"52361cb4-eea4-49c7-b06b-acbe0ad24450\") " pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.576302 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"95192227-96aa-4fa8-a7db-89f31efb056c\") " pod="openstack/rabbitmq-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.589821 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:08:54 crc kubenswrapper[4833]: I0219 13:08:54.605763 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 19 13:08:55 crc kubenswrapper[4833]: I0219 13:08:55.084752 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 19 13:08:55 crc kubenswrapper[4833]: W0219 13:08:55.097792 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95192227_96aa_4fa8_a7db_89f31efb056c.slice/crio-98b04abb8240956598a65a1b5e7b6677931840968179e04b9c620a50cf24f2fc WatchSource:0}: Error finding container 98b04abb8240956598a65a1b5e7b6677931840968179e04b9c620a50cf24f2fc: Status 404 returned error can't find the container with id 98b04abb8240956598a65a1b5e7b6677931840968179e04b9c620a50cf24f2fc Feb 19 13:08:55 crc kubenswrapper[4833]: I0219 13:08:55.099203 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 19 13:08:55 crc kubenswrapper[4833]: W0219 13:08:55.100788 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52361cb4_eea4_49c7_b06b_acbe0ad24450.slice/crio-0e37a7ac93e813ab37d9f2463adc461bf8d998b19c9e9f9a91265e90e2d30f7e WatchSource:0}: Error finding container 0e37a7ac93e813ab37d9f2463adc461bf8d998b19c9e9f9a91265e90e2d30f7e: Status 404 returned error can't find the container with id 0e37a7ac93e813ab37d9f2463adc461bf8d998b19c9e9f9a91265e90e2d30f7e Feb 19 13:08:55 crc kubenswrapper[4833]: I0219 13:08:55.144731 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"95192227-96aa-4fa8-a7db-89f31efb056c","Type":"ContainerStarted","Data":"98b04abb8240956598a65a1b5e7b6677931840968179e04b9c620a50cf24f2fc"} Feb 19 13:08:55 crc kubenswrapper[4833]: I0219 13:08:55.148393 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"52361cb4-eea4-49c7-b06b-acbe0ad24450","Type":"ContainerStarted","Data":"0e37a7ac93e813ab37d9f2463adc461bf8d998b19c9e9f9a91265e90e2d30f7e"} Feb 19 13:08:56 crc kubenswrapper[4833]: I0219 13:08:56.835267 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d558885bc-67z7k"] Feb 19 13:08:56 crc kubenswrapper[4833]: I0219 13:08:56.840375 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-67z7k" Feb 19 13:08:56 crc kubenswrapper[4833]: I0219 13:08:56.881373 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 19 13:08:56 crc kubenswrapper[4833]: I0219 13:08:56.901951 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-67z7k"] Feb 19 13:08:56 crc kubenswrapper[4833]: I0219 13:08:56.965142 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-67z7k\" (UID: \"c778bac9-060c-4348-b5e7-9844f37733a7\") " pod="openstack/dnsmasq-dns-d558885bc-67z7k" Feb 19 13:08:56 crc kubenswrapper[4833]: I0219 13:08:56.965242 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-config\") pod \"dnsmasq-dns-d558885bc-67z7k\" (UID: \"c778bac9-060c-4348-b5e7-9844f37733a7\") " pod="openstack/dnsmasq-dns-d558885bc-67z7k" Feb 19 13:08:56 crc kubenswrapper[4833]: I0219 13:08:56.965309 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fczkg\" (UniqueName: \"kubernetes.io/projected/c778bac9-060c-4348-b5e7-9844f37733a7-kube-api-access-fczkg\") pod \"dnsmasq-dns-d558885bc-67z7k\" (UID: \"c778bac9-060c-4348-b5e7-9844f37733a7\") " pod="openstack/dnsmasq-dns-d558885bc-67z7k" Feb 19 13:08:56 crc kubenswrapper[4833]: I0219 13:08:56.965370 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-67z7k\" (UID: \"c778bac9-060c-4348-b5e7-9844f37733a7\") " pod="openstack/dnsmasq-dns-d558885bc-67z7k" Feb 19 13:08:56 crc kubenswrapper[4833]: I0219 13:08:56.965402 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-67z7k\" (UID: \"c778bac9-060c-4348-b5e7-9844f37733a7\") " pod="openstack/dnsmasq-dns-d558885bc-67z7k" Feb 19 13:08:56 crc kubenswrapper[4833]: I0219 13:08:56.965477 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-67z7k\" (UID: \"c778bac9-060c-4348-b5e7-9844f37733a7\") " pod="openstack/dnsmasq-dns-d558885bc-67z7k" Feb 19 13:08:56 crc kubenswrapper[4833]: I0219 13:08:56.965572 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-dns-svc\") pod \"dnsmasq-dns-d558885bc-67z7k\" (UID: \"c778bac9-060c-4348-b5e7-9844f37733a7\") " pod="openstack/dnsmasq-dns-d558885bc-67z7k" Feb 19 13:08:57 crc kubenswrapper[4833]: I0219 13:08:57.066889 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-67z7k\" (UID: 
\"c778bac9-060c-4348-b5e7-9844f37733a7\") " pod="openstack/dnsmasq-dns-d558885bc-67z7k" Feb 19 13:08:57 crc kubenswrapper[4833]: I0219 13:08:57.067003 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-config\") pod \"dnsmasq-dns-d558885bc-67z7k\" (UID: \"c778bac9-060c-4348-b5e7-9844f37733a7\") " pod="openstack/dnsmasq-dns-d558885bc-67z7k" Feb 19 13:08:57 crc kubenswrapper[4833]: I0219 13:08:57.067096 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fczkg\" (UniqueName: \"kubernetes.io/projected/c778bac9-060c-4348-b5e7-9844f37733a7-kube-api-access-fczkg\") pod \"dnsmasq-dns-d558885bc-67z7k\" (UID: \"c778bac9-060c-4348-b5e7-9844f37733a7\") " pod="openstack/dnsmasq-dns-d558885bc-67z7k" Feb 19 13:08:57 crc kubenswrapper[4833]: I0219 13:08:57.067134 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-67z7k\" (UID: \"c778bac9-060c-4348-b5e7-9844f37733a7\") " pod="openstack/dnsmasq-dns-d558885bc-67z7k" Feb 19 13:08:57 crc kubenswrapper[4833]: I0219 13:08:57.067178 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-67z7k\" (UID: \"c778bac9-060c-4348-b5e7-9844f37733a7\") " pod="openstack/dnsmasq-dns-d558885bc-67z7k" Feb 19 13:08:57 crc kubenswrapper[4833]: I0219 13:08:57.067200 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-67z7k\" (UID: \"c778bac9-060c-4348-b5e7-9844f37733a7\") " pod="openstack/dnsmasq-dns-d558885bc-67z7k" Feb 19 13:08:57 crc kubenswrapper[4833]: I0219 13:08:57.067229 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-dns-svc\") pod \"dnsmasq-dns-d558885bc-67z7k\" (UID: \"c778bac9-060c-4348-b5e7-9844f37733a7\") " pod="openstack/dnsmasq-dns-d558885bc-67z7k" Feb 19 13:08:57 crc kubenswrapper[4833]: I0219 13:08:57.067926 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-67z7k\" (UID: \"c778bac9-060c-4348-b5e7-9844f37733a7\") " pod="openstack/dnsmasq-dns-d558885bc-67z7k" Feb 19 13:08:57 crc kubenswrapper[4833]: I0219 13:08:57.068263 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-67z7k\" (UID: \"c778bac9-060c-4348-b5e7-9844f37733a7\") " pod="openstack/dnsmasq-dns-d558885bc-67z7k" Feb 19 13:08:57 crc kubenswrapper[4833]: I0219 13:08:57.068343 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-dns-svc\") pod \"dnsmasq-dns-d558885bc-67z7k\" (UID: \"c778bac9-060c-4348-b5e7-9844f37733a7\") " pod="openstack/dnsmasq-dns-d558885bc-67z7k" Feb 19 13:08:57 
crc kubenswrapper[4833]: I0219 13:08:57.068457 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-67z7k\" (UID: \"c778bac9-060c-4348-b5e7-9844f37733a7\") " pod="openstack/dnsmasq-dns-d558885bc-67z7k" Feb 19 13:08:57 crc kubenswrapper[4833]: I0219 13:08:57.068459 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-config\") pod \"dnsmasq-dns-d558885bc-67z7k\" (UID: \"c778bac9-060c-4348-b5e7-9844f37733a7\") " pod="openstack/dnsmasq-dns-d558885bc-67z7k" Feb 19 13:08:57 crc kubenswrapper[4833]: I0219 13:08:57.069193 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-67z7k\" (UID: \"c778bac9-060c-4348-b5e7-9844f37733a7\") " pod="openstack/dnsmasq-dns-d558885bc-67z7k" Feb 19 13:08:57 crc kubenswrapper[4833]: I0219 13:08:57.085638 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fczkg\" (UniqueName: \"kubernetes.io/projected/c778bac9-060c-4348-b5e7-9844f37733a7-kube-api-access-fczkg\") pod \"dnsmasq-dns-d558885bc-67z7k\" (UID: \"c778bac9-060c-4348-b5e7-9844f37733a7\") " pod="openstack/dnsmasq-dns-d558885bc-67z7k" Feb 19 13:08:57 crc kubenswrapper[4833]: I0219 13:08:57.167463 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"95192227-96aa-4fa8-a7db-89f31efb056c","Type":"ContainerStarted","Data":"5b8c1ed2bf5250d91a5a243092a3d937eb69a937f85b9687767cce5249c49584"} Feb 19 13:08:57 crc kubenswrapper[4833]: I0219 13:08:57.170548 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"52361cb4-eea4-49c7-b06b-acbe0ad24450","Type":"ContainerStarted","Data":"539f296eaafc758b0025440b70a433516efb09971624cb3848dfa3b78750848d"} Feb 19 13:08:57 crc kubenswrapper[4833]: I0219 13:08:57.213921 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-67z7k" Feb 19 13:08:57 crc kubenswrapper[4833]: W0219 13:08:57.727026 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc778bac9_060c_4348_b5e7_9844f37733a7.slice/crio-ec53fada7535a8f0c8446c5387bb053ac06cfd22cacc1ca5fe319b1dcde54dca WatchSource:0}: Error finding container ec53fada7535a8f0c8446c5387bb053ac06cfd22cacc1ca5fe319b1dcde54dca: Status 404 returned error can't find the container with id ec53fada7535a8f0c8446c5387bb053ac06cfd22cacc1ca5fe319b1dcde54dca Feb 19 13:08:57 crc kubenswrapper[4833]: I0219 13:08:57.727194 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-67z7k"] Feb 19 13:08:58 crc kubenswrapper[4833]: I0219 13:08:58.178930 4833 generic.go:334] "Generic (PLEG): container finished" podID="c778bac9-060c-4348-b5e7-9844f37733a7" containerID="97fc23b45a48c42b5c1fd8ce6b941ad6653fcaa1c37e2562366d97f40faa13a7" exitCode=0 Feb 19 13:08:58 crc kubenswrapper[4833]: I0219 13:08:58.179133 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-67z7k" event={"ID":"c778bac9-060c-4348-b5e7-9844f37733a7","Type":"ContainerDied","Data":"97fc23b45a48c42b5c1fd8ce6b941ad6653fcaa1c37e2562366d97f40faa13a7"} Feb 19 13:08:58 crc kubenswrapper[4833]: I0219 13:08:58.179375 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-67z7k" event={"ID":"c778bac9-060c-4348-b5e7-9844f37733a7","Type":"ContainerStarted","Data":"ec53fada7535a8f0c8446c5387bb053ac06cfd22cacc1ca5fe319b1dcde54dca"} Feb 19 13:08:59 crc kubenswrapper[4833]: I0219 13:08:59.194239 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-67z7k" event={"ID":"c778bac9-060c-4348-b5e7-9844f37733a7","Type":"ContainerStarted","Data":"61bec1619ecaf7f40158afaf275f7667acc894dffe8eace66f18b354494157b4"} Feb 19 13:08:59 crc kubenswrapper[4833]: I0219 13:08:59.194620 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d558885bc-67z7k" Feb 19 13:08:59 crc kubenswrapper[4833]: I0219 13:08:59.231375 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d558885bc-67z7k" podStartSLOduration=3.23134963 podStartE2EDuration="3.23134963s" podCreationTimestamp="2026-02-19 13:08:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:08:59.218654572 +0000 UTC m=+1349.614173400" watchObservedRunningTime="2026-02-19 13:08:59.23134963 +0000 UTC m=+1349.626868428" Feb 19 13:09:07 crc kubenswrapper[4833]: I0219 13:09:07.215900 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d558885bc-67z7k" Feb 19 13:09:07 crc kubenswrapper[4833]: I0219 13:09:07.287094 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-xtgk8"] Feb 19 13:09:07 crc kubenswrapper[4833]: I0219 13:09:07.287450 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd5cbd7b9-xtgk8" podUID="9fbd0cf0-2b11-4cf2-af96-a2ab369efe20" containerName="dnsmasq-dns" containerID="cri-o://35edf2db1a7c42d070f348f8ca5ea679818dbcfd739d7fd4f2d9d8366873d6ed" gracePeriod=10 Feb 19 13:09:07 crc kubenswrapper[4833]: I0219 13:09:07.449428 4833 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-78c64bc9c5-lg95r"] Feb 19 13:09:07 crc kubenswrapper[4833]: I0219 13:09:07.451142 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78c64bc9c5-lg95r" Feb 19 13:09:07 crc kubenswrapper[4833]: I0219 13:09:07.461213 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-lg95r"] Feb 19 13:09:07 crc kubenswrapper[4833]: I0219 13:09:07.584839 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e9222a60-0b24-4d91-8002-74747339c9d5-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-lg95r\" (UID: \"e9222a60-0b24-4d91-8002-74747339c9d5\") " pod="openstack/dnsmasq-dns-78c64bc9c5-lg95r" Feb 19 13:09:07 crc kubenswrapper[4833]: I0219 13:09:07.585251 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9222a60-0b24-4d91-8002-74747339c9d5-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-lg95r\" (UID: \"e9222a60-0b24-4d91-8002-74747339c9d5\") " pod="openstack/dnsmasq-dns-78c64bc9c5-lg95r" Feb 19 13:09:07 crc kubenswrapper[4833]: I0219 13:09:07.585271 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9222a60-0b24-4d91-8002-74747339c9d5-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-lg95r\" (UID: \"e9222a60-0b24-4d91-8002-74747339c9d5\") " pod="openstack/dnsmasq-dns-78c64bc9c5-lg95r" Feb 19 13:09:07 crc kubenswrapper[4833]: I0219 13:09:07.585306 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9222a60-0b24-4d91-8002-74747339c9d5-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-lg95r\" (UID: \"e9222a60-0b24-4d91-8002-74747339c9d5\") " pod="openstack/dnsmasq-dns-78c64bc9c5-lg95r" Feb 19 13:09:07 crc kubenswrapper[4833]: I0219 13:09:07.585340 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e9222a60-0b24-4d91-8002-74747339c9d5-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-lg95r\" (UID: \"e9222a60-0b24-4d91-8002-74747339c9d5\") " pod="openstack/dnsmasq-dns-78c64bc9c5-lg95r" Feb 19 13:09:07 crc kubenswrapper[4833]: I0219 13:09:07.585399 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9222a60-0b24-4d91-8002-74747339c9d5-config\") pod \"dnsmasq-dns-78c64bc9c5-lg95r\" (UID: \"e9222a60-0b24-4d91-8002-74747339c9d5\") " pod="openstack/dnsmasq-dns-78c64bc9c5-lg95r" Feb 19 13:09:07 crc kubenswrapper[4833]: I0219 13:09:07.585476 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52kkf\" (UniqueName: \"kubernetes.io/projected/e9222a60-0b24-4d91-8002-74747339c9d5-kube-api-access-52kkf\") pod \"dnsmasq-dns-78c64bc9c5-lg95r\" (UID: \"e9222a60-0b24-4d91-8002-74747339c9d5\") " pod="openstack/dnsmasq-dns-78c64bc9c5-lg95r" Feb 19 13:09:07 crc kubenswrapper[4833]: I0219 13:09:07.686644 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e9222a60-0b24-4d91-8002-74747339c9d5-openstack-edpm-ipam\") pod 
\"dnsmasq-dns-78c64bc9c5-lg95r\" (UID: \"e9222a60-0b24-4d91-8002-74747339c9d5\") " pod="openstack/dnsmasq-dns-78c64bc9c5-lg95r" Feb 19 13:09:07 crc kubenswrapper[4833]: I0219 13:09:07.686764 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9222a60-0b24-4d91-8002-74747339c9d5-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-lg95r\" (UID: \"e9222a60-0b24-4d91-8002-74747339c9d5\") " pod="openstack/dnsmasq-dns-78c64bc9c5-lg95r" Feb 19 13:09:07 crc kubenswrapper[4833]: I0219 13:09:07.686786 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9222a60-0b24-4d91-8002-74747339c9d5-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-lg95r\" (UID: \"e9222a60-0b24-4d91-8002-74747339c9d5\") " pod="openstack/dnsmasq-dns-78c64bc9c5-lg95r" Feb 19 13:09:07 crc kubenswrapper[4833]: I0219 13:09:07.687596 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e9222a60-0b24-4d91-8002-74747339c9d5-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-lg95r\" (UID: \"e9222a60-0b24-4d91-8002-74747339c9d5\") " pod="openstack/dnsmasq-dns-78c64bc9c5-lg95r" Feb 19 13:09:07 crc kubenswrapper[4833]: I0219 13:09:07.687644 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e9222a60-0b24-4d91-8002-74747339c9d5-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-lg95r\" (UID: \"e9222a60-0b24-4d91-8002-74747339c9d5\") " pod="openstack/dnsmasq-dns-78c64bc9c5-lg95r" Feb 19 13:09:07 crc kubenswrapper[4833]: I0219 13:09:07.688151 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9222a60-0b24-4d91-8002-74747339c9d5-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-lg95r\" (UID: \"e9222a60-0b24-4d91-8002-74747339c9d5\") " pod="openstack/dnsmasq-dns-78c64bc9c5-lg95r" Feb 19 13:09:07 crc kubenswrapper[4833]: I0219 13:09:07.688259 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e9222a60-0b24-4d91-8002-74747339c9d5-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-lg95r\" (UID: \"e9222a60-0b24-4d91-8002-74747339c9d5\") " pod="openstack/dnsmasq-dns-78c64bc9c5-lg95r" Feb 19 13:09:07 crc kubenswrapper[4833]: I0219 13:09:07.688285 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9222a60-0b24-4d91-8002-74747339c9d5-config\") pod \"dnsmasq-dns-78c64bc9c5-lg95r\" (UID: \"e9222a60-0b24-4d91-8002-74747339c9d5\") " pod="openstack/dnsmasq-dns-78c64bc9c5-lg95r" Feb 19 13:09:07 crc kubenswrapper[4833]: I0219 13:09:07.688379 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52kkf\" (UniqueName: \"kubernetes.io/projected/e9222a60-0b24-4d91-8002-74747339c9d5-kube-api-access-52kkf\") pod \"dnsmasq-dns-78c64bc9c5-lg95r\" (UID: \"e9222a60-0b24-4d91-8002-74747339c9d5\") " pod="openstack/dnsmasq-dns-78c64bc9c5-lg95r" Feb 19 13:09:07 crc kubenswrapper[4833]: I0219 13:09:07.688701 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9222a60-0b24-4d91-8002-74747339c9d5-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-lg95r\" (UID: \"e9222a60-0b24-4d91-8002-74747339c9d5\") " 
pod="openstack/dnsmasq-dns-78c64bc9c5-lg95r" Feb 19 13:09:07 crc kubenswrapper[4833]: I0219 13:09:07.689482 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9222a60-0b24-4d91-8002-74747339c9d5-config\") pod \"dnsmasq-dns-78c64bc9c5-lg95r\" (UID: \"e9222a60-0b24-4d91-8002-74747339c9d5\") " pod="openstack/dnsmasq-dns-78c64bc9c5-lg95r" Feb 19 13:09:07 crc kubenswrapper[4833]: I0219 13:09:07.688168 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e9222a60-0b24-4d91-8002-74747339c9d5-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-lg95r\" (UID: \"e9222a60-0b24-4d91-8002-74747339c9d5\") " pod="openstack/dnsmasq-dns-78c64bc9c5-lg95r" Feb 19 13:09:07 crc kubenswrapper[4833]: I0219 13:09:07.689687 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e9222a60-0b24-4d91-8002-74747339c9d5-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-lg95r\" (UID: \"e9222a60-0b24-4d91-8002-74747339c9d5\") " pod="openstack/dnsmasq-dns-78c64bc9c5-lg95r" Feb 19 13:09:07 crc kubenswrapper[4833]: I0219 13:09:07.718804 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52kkf\" (UniqueName: \"kubernetes.io/projected/e9222a60-0b24-4d91-8002-74747339c9d5-kube-api-access-52kkf\") pod \"dnsmasq-dns-78c64bc9c5-lg95r\" (UID: \"e9222a60-0b24-4d91-8002-74747339c9d5\") " pod="openstack/dnsmasq-dns-78c64bc9c5-lg95r" Feb 19 13:09:07 crc kubenswrapper[4833]: I0219 13:09:07.784308 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78c64bc9c5-lg95r" Feb 19 13:09:07 crc kubenswrapper[4833]: I0219 13:09:07.900625 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-xtgk8" Feb 19 13:09:07 crc kubenswrapper[4833]: I0219 13:09:07.997988 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-dns-svc\") pod \"9fbd0cf0-2b11-4cf2-af96-a2ab369efe20\" (UID: \"9fbd0cf0-2b11-4cf2-af96-a2ab369efe20\") " Feb 19 13:09:07 crc kubenswrapper[4833]: I0219 13:09:07.999240 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-ovsdbserver-sb\") pod \"9fbd0cf0-2b11-4cf2-af96-a2ab369efe20\" (UID: \"9fbd0cf0-2b11-4cf2-af96-a2ab369efe20\") " Feb 19 13:09:07 crc kubenswrapper[4833]: I0219 13:09:07.999305 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qb88\" (UniqueName: \"kubernetes.io/projected/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-kube-api-access-6qb88\") pod \"9fbd0cf0-2b11-4cf2-af96-a2ab369efe20\" (UID: \"9fbd0cf0-2b11-4cf2-af96-a2ab369efe20\") " Feb 19 13:09:07 crc kubenswrapper[4833]: I0219 13:09:07.999380 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-config\") pod \"9fbd0cf0-2b11-4cf2-af96-a2ab369efe20\" (UID: \"9fbd0cf0-2b11-4cf2-af96-a2ab369efe20\") " Feb 19 13:09:07 crc kubenswrapper[4833]: I0219 13:09:07.999935 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-dns-swift-storage-0\") pod \"9fbd0cf0-2b11-4cf2-af96-a2ab369efe20\" (UID: \"9fbd0cf0-2b11-4cf2-af96-a2ab369efe20\") " Feb 19 13:09:08 crc kubenswrapper[4833]: I0219 13:09:08.000305 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-ovsdbserver-nb\") pod \"9fbd0cf0-2b11-4cf2-af96-a2ab369efe20\" (UID: \"9fbd0cf0-2b11-4cf2-af96-a2ab369efe20\") " Feb 19 13:09:08 crc kubenswrapper[4833]: I0219 13:09:08.005109 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-kube-api-access-6qb88" (OuterVolumeSpecName: "kube-api-access-6qb88") pod "9fbd0cf0-2b11-4cf2-af96-a2ab369efe20" (UID: "9fbd0cf0-2b11-4cf2-af96-a2ab369efe20"). InnerVolumeSpecName "kube-api-access-6qb88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:09:08 crc kubenswrapper[4833]: I0219 13:09:08.046578 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9fbd0cf0-2b11-4cf2-af96-a2ab369efe20" (UID: "9fbd0cf0-2b11-4cf2-af96-a2ab369efe20"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:09:08 crc kubenswrapper[4833]: I0219 13:09:08.052636 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-config" (OuterVolumeSpecName: "config") pod "9fbd0cf0-2b11-4cf2-af96-a2ab369efe20" (UID: "9fbd0cf0-2b11-4cf2-af96-a2ab369efe20"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:09:08 crc kubenswrapper[4833]: I0219 13:09:08.053370 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9fbd0cf0-2b11-4cf2-af96-a2ab369efe20" (UID: "9fbd0cf0-2b11-4cf2-af96-a2ab369efe20"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:09:08 crc kubenswrapper[4833]: I0219 13:09:08.057613 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9fbd0cf0-2b11-4cf2-af96-a2ab369efe20" (UID: "9fbd0cf0-2b11-4cf2-af96-a2ab369efe20"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:09:08 crc kubenswrapper[4833]: I0219 13:09:08.065031 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9fbd0cf0-2b11-4cf2-af96-a2ab369efe20" (UID: "9fbd0cf0-2b11-4cf2-af96-a2ab369efe20"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:09:08 crc kubenswrapper[4833]: I0219 13:09:08.103421 4833 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 13:09:08 crc kubenswrapper[4833]: I0219 13:09:08.103461 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 13:09:08 crc kubenswrapper[4833]: I0219 13:09:08.103473 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 13:09:08 crc kubenswrapper[4833]: I0219 13:09:08.103482 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 13:09:08 crc kubenswrapper[4833]: I0219 13:09:08.103490 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qb88\" (UniqueName: \"kubernetes.io/projected/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-kube-api-access-6qb88\") on node \"crc\" DevicePath \"\"" Feb 19 13:09:08 crc kubenswrapper[4833]: I0219 13:09:08.103535 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:09:08 crc kubenswrapper[4833]: I0219 13:09:08.249546 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-lg95r"] Feb 19 13:09:08 crc kubenswrapper[4833]: I0219 13:09:08.310795 4833 generic.go:334] "Generic (PLEG): container finished" podID="9fbd0cf0-2b11-4cf2-af96-a2ab369efe20" containerID="35edf2db1a7c42d070f348f8ca5ea679818dbcfd739d7fd4f2d9d8366873d6ed" exitCode=0 Feb 19 13:09:08 crc kubenswrapper[4833]: I0219 13:09:08.310856 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-cd5cbd7b9-xtgk8" event={"ID":"9fbd0cf0-2b11-4cf2-af96-a2ab369efe20","Type":"ContainerDied","Data":"35edf2db1a7c42d070f348f8ca5ea679818dbcfd739d7fd4f2d9d8366873d6ed"} Feb 19 13:09:08 crc kubenswrapper[4833]: I0219 13:09:08.310950 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-xtgk8" event={"ID":"9fbd0cf0-2b11-4cf2-af96-a2ab369efe20","Type":"ContainerDied","Data":"c338f8595d4a3d5f8237769c60b5ee393ce2f0ca8f90ce2155bd83c9498b9807"} Feb 19 13:09:08 crc kubenswrapper[4833]: I0219 13:09:08.310986 4833 scope.go:117] "RemoveContainer" containerID="35edf2db1a7c42d070f348f8ca5ea679818dbcfd739d7fd4f2d9d8366873d6ed" Feb 19 13:09:08 crc kubenswrapper[4833]: I0219 13:09:08.311246 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-xtgk8" Feb 19 13:09:08 crc kubenswrapper[4833]: I0219 13:09:08.312362 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-lg95r" event={"ID":"e9222a60-0b24-4d91-8002-74747339c9d5","Type":"ContainerStarted","Data":"6b245171835adbefdf672241b933dd7a317cb2004fef45d786b7da0f3cf83288"} Feb 19 13:09:08 crc kubenswrapper[4833]: I0219 13:09:08.347600 4833 scope.go:117] "RemoveContainer" containerID="29e197fb9dffbbe5d6911419f369a9b1ffdf726132da1d32f47aa208c9174267" Feb 19 13:09:08 crc kubenswrapper[4833]: I0219 13:09:08.361359 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-xtgk8"] Feb 19 13:09:08 crc kubenswrapper[4833]: I0219 13:09:08.371224 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-xtgk8"] Feb 19 13:09:08 crc kubenswrapper[4833]: I0219 13:09:08.388525 4833 scope.go:117] "RemoveContainer" containerID="35edf2db1a7c42d070f348f8ca5ea679818dbcfd739d7fd4f2d9d8366873d6ed" Feb 19 13:09:08 crc kubenswrapper[4833]: E0219 13:09:08.388912 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35edf2db1a7c42d070f348f8ca5ea679818dbcfd739d7fd4f2d9d8366873d6ed\": container with ID starting with 35edf2db1a7c42d070f348f8ca5ea679818dbcfd739d7fd4f2d9d8366873d6ed not found: ID does not exist" containerID="35edf2db1a7c42d070f348f8ca5ea679818dbcfd739d7fd4f2d9d8366873d6ed" Feb 19 13:09:08 crc kubenswrapper[4833]: I0219 13:09:08.388944 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35edf2db1a7c42d070f348f8ca5ea679818dbcfd739d7fd4f2d9d8366873d6ed"} err="failed to get container status \"35edf2db1a7c42d070f348f8ca5ea679818dbcfd739d7fd4f2d9d8366873d6ed\": rpc error: code = NotFound desc = could not find container \"35edf2db1a7c42d070f348f8ca5ea679818dbcfd739d7fd4f2d9d8366873d6ed\": container with ID starting with 35edf2db1a7c42d070f348f8ca5ea679818dbcfd739d7fd4f2d9d8366873d6ed not found: ID does not exist" Feb 19 13:09:08 crc kubenswrapper[4833]: I0219 13:09:08.388964 4833 scope.go:117] "RemoveContainer" containerID="29e197fb9dffbbe5d6911419f369a9b1ffdf726132da1d32f47aa208c9174267" Feb 19 13:09:08 crc kubenswrapper[4833]: E0219 13:09:08.389251 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29e197fb9dffbbe5d6911419f369a9b1ffdf726132da1d32f47aa208c9174267\": container with ID starting with 29e197fb9dffbbe5d6911419f369a9b1ffdf726132da1d32f47aa208c9174267 not found: ID does not exist" 
containerID="29e197fb9dffbbe5d6911419f369a9b1ffdf726132da1d32f47aa208c9174267" Feb 19 13:09:08 crc kubenswrapper[4833]: I0219 13:09:08.389334 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29e197fb9dffbbe5d6911419f369a9b1ffdf726132da1d32f47aa208c9174267"} err="failed to get container status \"29e197fb9dffbbe5d6911419f369a9b1ffdf726132da1d32f47aa208c9174267\": rpc error: code = NotFound desc = could not find container \"29e197fb9dffbbe5d6911419f369a9b1ffdf726132da1d32f47aa208c9174267\": container with ID starting with 29e197fb9dffbbe5d6911419f369a9b1ffdf726132da1d32f47aa208c9174267 not found: ID does not exist" Feb 19 13:09:09 crc kubenswrapper[4833]: I0219 13:09:09.323343 4833 generic.go:334] "Generic (PLEG): container finished" podID="e9222a60-0b24-4d91-8002-74747339c9d5" containerID="ffe2e4ec3ce4ae5a1926017b0370b7171eb218e049c5e5e14f90e533ed6680f9" exitCode=0 Feb 19 13:09:09 crc kubenswrapper[4833]: I0219 13:09:09.323397 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-lg95r" event={"ID":"e9222a60-0b24-4d91-8002-74747339c9d5","Type":"ContainerDied","Data":"ffe2e4ec3ce4ae5a1926017b0370b7171eb218e049c5e5e14f90e533ed6680f9"} Feb 19 13:09:10 crc kubenswrapper[4833]: I0219 13:09:10.329907 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fbd0cf0-2b11-4cf2-af96-a2ab369efe20" path="/var/lib/kubelet/pods/9fbd0cf0-2b11-4cf2-af96-a2ab369efe20/volumes" Feb 19 13:09:10 crc kubenswrapper[4833]: I0219 13:09:10.334016 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-lg95r" event={"ID":"e9222a60-0b24-4d91-8002-74747339c9d5","Type":"ContainerStarted","Data":"6e60b931b71864ac7cf7a7c8fb39dab17b7afdffb9d201f8896bb71c19590e9a"} Feb 19 13:09:10 crc kubenswrapper[4833]: I0219 13:09:10.334217 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78c64bc9c5-lg95r" Feb 19 13:09:10 crc kubenswrapper[4833]: I0219 13:09:10.362465 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78c64bc9c5-lg95r" podStartSLOduration=3.362444297 podStartE2EDuration="3.362444297s" podCreationTimestamp="2026-02-19 13:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:09:10.354807163 +0000 UTC m=+1360.750325931" watchObservedRunningTime="2026-02-19 13:09:10.362444297 +0000 UTC m=+1360.757963075" Feb 19 13:09:15 crc kubenswrapper[4833]: I0219 13:09:15.744486 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:09:15 crc kubenswrapper[4833]: I0219 13:09:15.745241 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:09:17 crc kubenswrapper[4833]: I0219 13:09:17.786847 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78c64bc9c5-lg95r" Feb 19 13:09:17 crc kubenswrapper[4833]: I0219 13:09:17.869583 4833 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-67z7k"] Feb 19 13:09:17 crc kubenswrapper[4833]: I0219 13:09:17.869816 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d558885bc-67z7k" podUID="c778bac9-060c-4348-b5e7-9844f37733a7" containerName="dnsmasq-dns" containerID="cri-o://61bec1619ecaf7f40158afaf275f7667acc894dffe8eace66f18b354494157b4" gracePeriod=10 Feb 19 13:09:18 crc kubenswrapper[4833]: I0219 13:09:18.289318 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-67z7k" Feb 19 13:09:18 crc kubenswrapper[4833]: I0219 13:09:18.422032 4833 generic.go:334] "Generic (PLEG): container finished" podID="c778bac9-060c-4348-b5e7-9844f37733a7" containerID="61bec1619ecaf7f40158afaf275f7667acc894dffe8eace66f18b354494157b4" exitCode=0 Feb 19 13:09:18 crc kubenswrapper[4833]: I0219 13:09:18.422081 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-67z7k" event={"ID":"c778bac9-060c-4348-b5e7-9844f37733a7","Type":"ContainerDied","Data":"61bec1619ecaf7f40158afaf275f7667acc894dffe8eace66f18b354494157b4"} Feb 19 13:09:18 crc kubenswrapper[4833]: I0219 13:09:18.422134 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-67z7k" event={"ID":"c778bac9-060c-4348-b5e7-9844f37733a7","Type":"ContainerDied","Data":"ec53fada7535a8f0c8446c5387bb053ac06cfd22cacc1ca5fe319b1dcde54dca"} Feb 19 13:09:18 crc kubenswrapper[4833]: I0219 13:09:18.422151 4833 scope.go:117] "RemoveContainer" containerID="61bec1619ecaf7f40158afaf275f7667acc894dffe8eace66f18b354494157b4" Feb 19 13:09:18 crc kubenswrapper[4833]: I0219 13:09:18.422099 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-67z7k" Feb 19 13:09:18 crc kubenswrapper[4833]: I0219 13:09:18.438511 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-dns-svc\") pod \"c778bac9-060c-4348-b5e7-9844f37733a7\" (UID: \"c778bac9-060c-4348-b5e7-9844f37733a7\") " Feb 19 13:09:18 crc kubenswrapper[4833]: I0219 13:09:18.438573 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fczkg\" (UniqueName: \"kubernetes.io/projected/c778bac9-060c-4348-b5e7-9844f37733a7-kube-api-access-fczkg\") pod \"c778bac9-060c-4348-b5e7-9844f37733a7\" (UID: \"c778bac9-060c-4348-b5e7-9844f37733a7\") " Feb 19 13:09:18 crc kubenswrapper[4833]: I0219 13:09:18.438892 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-openstack-edpm-ipam\") pod \"c778bac9-060c-4348-b5e7-9844f37733a7\" (UID: \"c778bac9-060c-4348-b5e7-9844f37733a7\") " Feb 19 13:09:18 crc kubenswrapper[4833]: I0219 13:09:18.438944 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-ovsdbserver-sb\") pod \"c778bac9-060c-4348-b5e7-9844f37733a7\" (UID: \"c778bac9-060c-4348-b5e7-9844f37733a7\") " Feb 19 13:09:18 crc kubenswrapper[4833]: I0219 13:09:18.438996 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-config\") pod \"c778bac9-060c-4348-b5e7-9844f37733a7\" (UID: \"c778bac9-060c-4348-b5e7-9844f37733a7\") " Feb 19 13:09:18 crc kubenswrapper[4833]: I0219 13:09:18.439015 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-ovsdbserver-nb\") pod \"c778bac9-060c-4348-b5e7-9844f37733a7\" (UID: \"c778bac9-060c-4348-b5e7-9844f37733a7\") " Feb 19 13:09:18 crc kubenswrapper[4833]: I0219 13:09:18.439068 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-dns-swift-storage-0\") pod \"c778bac9-060c-4348-b5e7-9844f37733a7\" (UID: \"c778bac9-060c-4348-b5e7-9844f37733a7\") " Feb 19 13:09:18 crc kubenswrapper[4833]: I0219 13:09:18.445209 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c778bac9-060c-4348-b5e7-9844f37733a7-kube-api-access-fczkg" (OuterVolumeSpecName: "kube-api-access-fczkg") pod "c778bac9-060c-4348-b5e7-9844f37733a7" (UID: "c778bac9-060c-4348-b5e7-9844f37733a7"). InnerVolumeSpecName "kube-api-access-fczkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:09:18 crc kubenswrapper[4833]: I0219 13:09:18.479438 4833 scope.go:117] "RemoveContainer" containerID="97fc23b45a48c42b5c1fd8ce6b941ad6653fcaa1c37e2562366d97f40faa13a7" Feb 19 13:09:18 crc kubenswrapper[4833]: I0219 13:09:18.500730 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "c778bac9-060c-4348-b5e7-9844f37733a7" (UID: "c778bac9-060c-4348-b5e7-9844f37733a7"). 
InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:09:18 crc kubenswrapper[4833]: I0219 13:09:18.502316 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c778bac9-060c-4348-b5e7-9844f37733a7" (UID: "c778bac9-060c-4348-b5e7-9844f37733a7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:09:18 crc kubenswrapper[4833]: I0219 13:09:18.510727 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c778bac9-060c-4348-b5e7-9844f37733a7" (UID: "c778bac9-060c-4348-b5e7-9844f37733a7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:09:18 crc kubenswrapper[4833]: I0219 13:09:18.515791 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c778bac9-060c-4348-b5e7-9844f37733a7" (UID: "c778bac9-060c-4348-b5e7-9844f37733a7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:09:18 crc kubenswrapper[4833]: I0219 13:09:18.527515 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-config" (OuterVolumeSpecName: "config") pod "c778bac9-060c-4348-b5e7-9844f37733a7" (UID: "c778bac9-060c-4348-b5e7-9844f37733a7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:09:18 crc kubenswrapper[4833]: I0219 13:09:18.531288 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c778bac9-060c-4348-b5e7-9844f37733a7" (UID: "c778bac9-060c-4348-b5e7-9844f37733a7"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:09:18 crc kubenswrapper[4833]: I0219 13:09:18.541537 4833 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 19 13:09:18 crc kubenswrapper[4833]: I0219 13:09:18.541568 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fczkg\" (UniqueName: \"kubernetes.io/projected/c778bac9-060c-4348-b5e7-9844f37733a7-kube-api-access-fczkg\") on node \"crc\" DevicePath \"\"" Feb 19 13:09:18 crc kubenswrapper[4833]: I0219 13:09:18.541579 4833 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 13:09:18 crc kubenswrapper[4833]: I0219 13:09:18.541589 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 19 13:09:18 crc kubenswrapper[4833]: I0219 13:09:18.541598 4833 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:09:18 crc kubenswrapper[4833]: I0219 13:09:18.541605 4833 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 19 13:09:18 crc kubenswrapper[4833]: I0219 13:09:18.541614 4833 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c778bac9-060c-4348-b5e7-9844f37733a7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 19 13:09:18 crc kubenswrapper[4833]: I0219 13:09:18.575399 4833 scope.go:117] "RemoveContainer" containerID="61bec1619ecaf7f40158afaf275f7667acc894dffe8eace66f18b354494157b4" Feb 19 13:09:18 crc kubenswrapper[4833]: E0219 13:09:18.576429 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61bec1619ecaf7f40158afaf275f7667acc894dffe8eace66f18b354494157b4\": container with ID starting with 61bec1619ecaf7f40158afaf275f7667acc894dffe8eace66f18b354494157b4 not found: ID does not exist" containerID="61bec1619ecaf7f40158afaf275f7667acc894dffe8eace66f18b354494157b4" Feb 19 13:09:18 crc kubenswrapper[4833]: I0219 13:09:18.576460 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61bec1619ecaf7f40158afaf275f7667acc894dffe8eace66f18b354494157b4"} err="failed to get container status \"61bec1619ecaf7f40158afaf275f7667acc894dffe8eace66f18b354494157b4\": rpc error: code = NotFound desc = could not find container \"61bec1619ecaf7f40158afaf275f7667acc894dffe8eace66f18b354494157b4\": container with ID starting with 61bec1619ecaf7f40158afaf275f7667acc894dffe8eace66f18b354494157b4 not found: ID does not exist" Feb 19 13:09:18 crc kubenswrapper[4833]: I0219 13:09:18.576479 4833 scope.go:117] "RemoveContainer" containerID="97fc23b45a48c42b5c1fd8ce6b941ad6653fcaa1c37e2562366d97f40faa13a7" Feb 19 13:09:18 crc kubenswrapper[4833]: E0219 13:09:18.576968 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"97fc23b45a48c42b5c1fd8ce6b941ad6653fcaa1c37e2562366d97f40faa13a7\": container with ID starting with 97fc23b45a48c42b5c1fd8ce6b941ad6653fcaa1c37e2562366d97f40faa13a7 not found: ID does not exist" containerID="97fc23b45a48c42b5c1fd8ce6b941ad6653fcaa1c37e2562366d97f40faa13a7" Feb 19 13:09:18 crc kubenswrapper[4833]: I0219 13:09:18.577051 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97fc23b45a48c42b5c1fd8ce6b941ad6653fcaa1c37e2562366d97f40faa13a7"} err="failed to get container status \"97fc23b45a48c42b5c1fd8ce6b941ad6653fcaa1c37e2562366d97f40faa13a7\": rpc error: code = NotFound desc = could not find container \"97fc23b45a48c42b5c1fd8ce6b941ad6653fcaa1c37e2562366d97f40faa13a7\": container with ID starting with 97fc23b45a48c42b5c1fd8ce6b941ad6653fcaa1c37e2562366d97f40faa13a7 not found: ID does not exist" Feb 19 13:09:18 crc kubenswrapper[4833]: I0219 13:09:18.766904 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-67z7k"] Feb 19 13:09:18 crc kubenswrapper[4833]: I0219 13:09:18.783430 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-67z7k"] Feb 19 13:09:20 crc kubenswrapper[4833]: I0219 13:09:20.348783 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c778bac9-060c-4348-b5e7-9844f37733a7" path="/var/lib/kubelet/pods/c778bac9-060c-4348-b5e7-9844f37733a7/volumes" Feb 19 13:09:29 crc kubenswrapper[4833]: I0219 13:09:29.556850 4833 generic.go:334] "Generic (PLEG): container finished" podID="95192227-96aa-4fa8-a7db-89f31efb056c" containerID="5b8c1ed2bf5250d91a5a243092a3d937eb69a937f85b9687767cce5249c49584" exitCode=0 Feb 19 13:09:29 crc kubenswrapper[4833]: I0219 13:09:29.556954 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"95192227-96aa-4fa8-a7db-89f31efb056c","Type":"ContainerDied","Data":"5b8c1ed2bf5250d91a5a243092a3d937eb69a937f85b9687767cce5249c49584"} Feb 19 13:09:29 crc kubenswrapper[4833]: I0219 13:09:29.559958 4833 generic.go:334] "Generic (PLEG): container finished" podID="52361cb4-eea4-49c7-b06b-acbe0ad24450" containerID="539f296eaafc758b0025440b70a433516efb09971624cb3848dfa3b78750848d" exitCode=0 Feb 19 13:09:29 crc kubenswrapper[4833]: I0219 13:09:29.560028 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"52361cb4-eea4-49c7-b06b-acbe0ad24450","Type":"ContainerDied","Data":"539f296eaafc758b0025440b70a433516efb09971624cb3848dfa3b78750848d"} Feb 19 13:09:30 crc kubenswrapper[4833]: I0219 13:09:30.570205 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"52361cb4-eea4-49c7-b06b-acbe0ad24450","Type":"ContainerStarted","Data":"df8ce6674c0c3b2dee17f45bd0adb865d1bf739ef6e7ec9e41f5271b96484980"} Feb 19 13:09:30 crc kubenswrapper[4833]: I0219 13:09:30.570812 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:09:30 crc kubenswrapper[4833]: I0219 13:09:30.572057 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"95192227-96aa-4fa8-a7db-89f31efb056c","Type":"ContainerStarted","Data":"b8b52f6dbdaea9c664aa2ee5aca508685b7c5ed206e2664471cd483bf6de6a0d"} Feb 19 13:09:30 crc kubenswrapper[4833]: I0219 13:09:30.572287 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 19 13:09:30 crc 
kubenswrapper[4833]: I0219 13:09:30.603379 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.603343772 podStartE2EDuration="36.603343772s" podCreationTimestamp="2026-02-19 13:08:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:09:30.592894546 +0000 UTC m=+1380.988413354" watchObservedRunningTime="2026-02-19 13:09:30.603343772 +0000 UTC m=+1380.998862550" Feb 19 13:09:30 crc kubenswrapper[4833]: I0219 13:09:30.627660 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.627635393 podStartE2EDuration="36.627635393s" podCreationTimestamp="2026-02-19 13:08:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:09:30.620529215 +0000 UTC m=+1381.016047993" watchObservedRunningTime="2026-02-19 13:09:30.627635393 +0000 UTC m=+1381.023154181" Feb 19 13:09:31 crc kubenswrapper[4833]: I0219 13:09:31.180094 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt"] Feb 19 13:09:31 crc kubenswrapper[4833]: E0219 13:09:31.180570 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fbd0cf0-2b11-4cf2-af96-a2ab369efe20" containerName="dnsmasq-dns" Feb 19 13:09:31 crc kubenswrapper[4833]: I0219 13:09:31.180603 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fbd0cf0-2b11-4cf2-af96-a2ab369efe20" containerName="dnsmasq-dns" Feb 19 13:09:31 crc kubenswrapper[4833]: E0219 13:09:31.180629 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fbd0cf0-2b11-4cf2-af96-a2ab369efe20" containerName="init" Feb 19 13:09:31 crc kubenswrapper[4833]: I0219 13:09:31.180638 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fbd0cf0-2b11-4cf2-af96-a2ab369efe20" containerName="init" Feb 19 13:09:31 crc kubenswrapper[4833]: E0219 13:09:31.180657 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c778bac9-060c-4348-b5e7-9844f37733a7" containerName="dnsmasq-dns" Feb 19 13:09:31 crc kubenswrapper[4833]: I0219 13:09:31.180664 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c778bac9-060c-4348-b5e7-9844f37733a7" containerName="dnsmasq-dns" Feb 19 13:09:31 crc kubenswrapper[4833]: E0219 13:09:31.180681 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c778bac9-060c-4348-b5e7-9844f37733a7" containerName="init" Feb 19 13:09:31 crc kubenswrapper[4833]: I0219 13:09:31.180689 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c778bac9-060c-4348-b5e7-9844f37733a7" containerName="init" Feb 19 13:09:31 crc kubenswrapper[4833]: I0219 13:09:31.180908 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c778bac9-060c-4348-b5e7-9844f37733a7" containerName="dnsmasq-dns" Feb 19 13:09:31 crc kubenswrapper[4833]: I0219 13:09:31.180933 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fbd0cf0-2b11-4cf2-af96-a2ab369efe20" containerName="dnsmasq-dns" Feb 19 13:09:31 crc kubenswrapper[4833]: I0219 13:09:31.181663 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt" Feb 19 13:09:31 crc kubenswrapper[4833]: I0219 13:09:31.194933 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt"] Feb 19 13:09:31 crc kubenswrapper[4833]: I0219 13:09:31.218216 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 13:09:31 crc kubenswrapper[4833]: I0219 13:09:31.218527 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 13:09:31 crc kubenswrapper[4833]: I0219 13:09:31.218718 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xcxf4" Feb 19 13:09:31 crc kubenswrapper[4833]: I0219 13:09:31.219153 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 13:09:31 crc kubenswrapper[4833]: I0219 13:09:31.298303 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0457ceaa-c998-49db-bfa7-f837bf684537-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt\" (UID: \"0457ceaa-c998-49db-bfa7-f837bf684537\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt" Feb 19 13:09:31 crc kubenswrapper[4833]: I0219 13:09:31.298386 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snlbz\" (UniqueName: \"kubernetes.io/projected/0457ceaa-c998-49db-bfa7-f837bf684537-kube-api-access-snlbz\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt\" (UID: \"0457ceaa-c998-49db-bfa7-f837bf684537\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt" Feb 19 13:09:31 crc kubenswrapper[4833]: I0219 13:09:31.298653 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0457ceaa-c998-49db-bfa7-f837bf684537-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt\" (UID: \"0457ceaa-c998-49db-bfa7-f837bf684537\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt" Feb 19 13:09:31 crc kubenswrapper[4833]: I0219 13:09:31.298863 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0457ceaa-c998-49db-bfa7-f837bf684537-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt\" (UID: \"0457ceaa-c998-49db-bfa7-f837bf684537\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt" Feb 19 13:09:31 crc kubenswrapper[4833]: I0219 13:09:31.400640 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0457ceaa-c998-49db-bfa7-f837bf684537-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt\" (UID: \"0457ceaa-c998-49db-bfa7-f837bf684537\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt" Feb 19 13:09:31 crc kubenswrapper[4833]: I0219 13:09:31.400775 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0457ceaa-c998-49db-bfa7-f837bf684537-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt\" (UID: \"0457ceaa-c998-49db-bfa7-f837bf684537\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt" Feb 19 13:09:31 crc kubenswrapper[4833]: I0219 13:09:31.401844 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snlbz\" (UniqueName: \"kubernetes.io/projected/0457ceaa-c998-49db-bfa7-f837bf684537-kube-api-access-snlbz\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt\" (UID: \"0457ceaa-c998-49db-bfa7-f837bf684537\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt" Feb 19 13:09:31 crc kubenswrapper[4833]: I0219 13:09:31.402433 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0457ceaa-c998-49db-bfa7-f837bf684537-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt\" (UID: \"0457ceaa-c998-49db-bfa7-f837bf684537\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt" Feb 19 13:09:31 crc kubenswrapper[4833]: I0219 13:09:31.407149 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0457ceaa-c998-49db-bfa7-f837bf684537-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt\" (UID: \"0457ceaa-c998-49db-bfa7-f837bf684537\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt" Feb 19 13:09:31 crc kubenswrapper[4833]: I0219 13:09:31.408266 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0457ceaa-c998-49db-bfa7-f837bf684537-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt\" (UID: \"0457ceaa-c998-49db-bfa7-f837bf684537\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt" Feb 19 13:09:31 crc kubenswrapper[4833]: I0219 13:09:31.413145 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0457ceaa-c998-49db-bfa7-f837bf684537-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt\" (UID: \"0457ceaa-c998-49db-bfa7-f837bf684537\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt" Feb 19 13:09:31 crc kubenswrapper[4833]: I0219 13:09:31.427726 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snlbz\" (UniqueName: \"kubernetes.io/projected/0457ceaa-c998-49db-bfa7-f837bf684537-kube-api-access-snlbz\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt\" (UID: \"0457ceaa-c998-49db-bfa7-f837bf684537\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt" Feb 19 13:09:31 crc kubenswrapper[4833]: I0219 13:09:31.536690 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt" Feb 19 13:09:32 crc kubenswrapper[4833]: I0219 13:09:32.106600 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt"] Feb 19 13:09:32 crc kubenswrapper[4833]: W0219 13:09:32.116700 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0457ceaa_c998_49db_bfa7_f837bf684537.slice/crio-e24c5cb0c3e72332258abe400a97d7e4f4de13ac7658d9c87100adc1648024d8 WatchSource:0}: Error finding container e24c5cb0c3e72332258abe400a97d7e4f4de13ac7658d9c87100adc1648024d8: Status 404 returned error can't find the container with id e24c5cb0c3e72332258abe400a97d7e4f4de13ac7658d9c87100adc1648024d8 Feb 19 13:09:32 crc kubenswrapper[4833]: I0219 13:09:32.118786 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 13:09:32 crc kubenswrapper[4833]: I0219 13:09:32.590951 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt" event={"ID":"0457ceaa-c998-49db-bfa7-f837bf684537","Type":"ContainerStarted","Data":"e24c5cb0c3e72332258abe400a97d7e4f4de13ac7658d9c87100adc1648024d8"} Feb 19 13:09:41 crc kubenswrapper[4833]: I0219 13:09:41.310195 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5qwmc"] Feb 19 13:09:41 crc kubenswrapper[4833]: I0219 13:09:41.313947 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5qwmc" Feb 19 13:09:41 crc kubenswrapper[4833]: I0219 13:09:41.324441 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5qwmc"] Feb 19 13:09:41 crc kubenswrapper[4833]: I0219 13:09:41.393250 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff0ff72d-af76-4711-9981-0790191376c3-catalog-content\") pod \"redhat-operators-5qwmc\" (UID: \"ff0ff72d-af76-4711-9981-0790191376c3\") " pod="openshift-marketplace/redhat-operators-5qwmc" Feb 19 13:09:41 crc kubenswrapper[4833]: I0219 13:09:41.393393 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hl4n\" (UniqueName: \"kubernetes.io/projected/ff0ff72d-af76-4711-9981-0790191376c3-kube-api-access-5hl4n\") pod \"redhat-operators-5qwmc\" (UID: \"ff0ff72d-af76-4711-9981-0790191376c3\") " pod="openshift-marketplace/redhat-operators-5qwmc" Feb 19 13:09:41 crc kubenswrapper[4833]: I0219 13:09:41.393546 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff0ff72d-af76-4711-9981-0790191376c3-utilities\") pod \"redhat-operators-5qwmc\" (UID: \"ff0ff72d-af76-4711-9981-0790191376c3\") " pod="openshift-marketplace/redhat-operators-5qwmc" Feb 19 13:09:41 crc kubenswrapper[4833]: I0219 13:09:41.502071 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff0ff72d-af76-4711-9981-0790191376c3-catalog-content\") pod \"redhat-operators-5qwmc\" (UID: \"ff0ff72d-af76-4711-9981-0790191376c3\") " pod="openshift-marketplace/redhat-operators-5qwmc" Feb 19 13:09:41 crc kubenswrapper[4833]: I0219 13:09:41.502151 4833 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hl4n\" (UniqueName: \"kubernetes.io/projected/ff0ff72d-af76-4711-9981-0790191376c3-kube-api-access-5hl4n\") pod \"redhat-operators-5qwmc\" (UID: \"ff0ff72d-af76-4711-9981-0790191376c3\") " pod="openshift-marketplace/redhat-operators-5qwmc" Feb 19 13:09:41 crc kubenswrapper[4833]: I0219 13:09:41.502216 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff0ff72d-af76-4711-9981-0790191376c3-utilities\") pod \"redhat-operators-5qwmc\" (UID: \"ff0ff72d-af76-4711-9981-0790191376c3\") " pod="openshift-marketplace/redhat-operators-5qwmc" Feb 19 13:09:41 crc kubenswrapper[4833]: I0219 13:09:41.503020 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff0ff72d-af76-4711-9981-0790191376c3-utilities\") pod \"redhat-operators-5qwmc\" (UID: \"ff0ff72d-af76-4711-9981-0790191376c3\") " pod="openshift-marketplace/redhat-operators-5qwmc" Feb 19 13:09:41 crc kubenswrapper[4833]: I0219 13:09:41.503740 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff0ff72d-af76-4711-9981-0790191376c3-catalog-content\") pod \"redhat-operators-5qwmc\" (UID: \"ff0ff72d-af76-4711-9981-0790191376c3\") " pod="openshift-marketplace/redhat-operators-5qwmc" Feb 19 13:09:41 crc kubenswrapper[4833]: I0219 13:09:41.523235 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hl4n\" (UniqueName: \"kubernetes.io/projected/ff0ff72d-af76-4711-9981-0790191376c3-kube-api-access-5hl4n\") pod \"redhat-operators-5qwmc\" (UID: \"ff0ff72d-af76-4711-9981-0790191376c3\") " pod="openshift-marketplace/redhat-operators-5qwmc" Feb 19 13:09:41 crc kubenswrapper[4833]: I0219 13:09:41.670306 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5qwmc" Feb 19 13:09:41 crc kubenswrapper[4833]: I0219 13:09:41.670422 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt" event={"ID":"0457ceaa-c998-49db-bfa7-f837bf684537","Type":"ContainerStarted","Data":"cd24eb6f53d751c04976e1044649a870c52706a4fd6c6943228c118151340572"} Feb 19 13:09:41 crc kubenswrapper[4833]: I0219 13:09:41.691547 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt" podStartSLOduration=2.275959922 podStartE2EDuration="10.691531591s" podCreationTimestamp="2026-02-19 13:09:31 +0000 UTC" firstStartedPulling="2026-02-19 13:09:32.11850856 +0000 UTC m=+1382.514027338" lastFinishedPulling="2026-02-19 13:09:40.534080239 +0000 UTC m=+1390.929599007" observedRunningTime="2026-02-19 13:09:41.687003002 +0000 UTC m=+1392.082521780" watchObservedRunningTime="2026-02-19 13:09:41.691531591 +0000 UTC m=+1392.087050359" Feb 19 13:09:42 crc kubenswrapper[4833]: I0219 13:09:42.425264 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5qwmc"] Feb 19 13:09:42 crc kubenswrapper[4833]: W0219 13:09:42.435845 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff0ff72d_af76_4711_9981_0790191376c3.slice/crio-4aa04a8e7ebd6c4c9532c385e4f38da3ed111c77166c45c46c6e8e4e115bf17a WatchSource:0}: Error finding container 4aa04a8e7ebd6c4c9532c385e4f38da3ed111c77166c45c46c6e8e4e115bf17a: Status 404 returned error can't find the container with id 4aa04a8e7ebd6c4c9532c385e4f38da3ed111c77166c45c46c6e8e4e115bf17a Feb 19 13:09:42 crc kubenswrapper[4833]: I0219 13:09:42.681729 4833 generic.go:334] "Generic (PLEG): container finished" podID="ff0ff72d-af76-4711-9981-0790191376c3" containerID="9e04c6a9d9cfc569a4f148e4960aa861ab404ef26aa1d7ff1435517222c8dcf9" exitCode=0 Feb 19 13:09:42 crc kubenswrapper[4833]: I0219 13:09:42.681884 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qwmc" event={"ID":"ff0ff72d-af76-4711-9981-0790191376c3","Type":"ContainerDied","Data":"9e04c6a9d9cfc569a4f148e4960aa861ab404ef26aa1d7ff1435517222c8dcf9"} Feb 19 13:09:42 crc kubenswrapper[4833]: I0219 13:09:42.681969 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qwmc" event={"ID":"ff0ff72d-af76-4711-9981-0790191376c3","Type":"ContainerStarted","Data":"4aa04a8e7ebd6c4c9532c385e4f38da3ed111c77166c45c46c6e8e4e115bf17a"} Feb 19 13:09:44 crc kubenswrapper[4833]: I0219 13:09:44.592678 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 19 13:09:44 crc kubenswrapper[4833]: I0219 13:09:44.608785 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 19 13:09:44 crc kubenswrapper[4833]: I0219 13:09:44.706002 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qwmc" event={"ID":"ff0ff72d-af76-4711-9981-0790191376c3","Type":"ContainerStarted","Data":"6d4d99c9fb226cd77f114839770c92d17dc9960f4364a0bba089cd63f691a18c"} Feb 19 13:09:45 crc kubenswrapper[4833]: I0219 13:09:45.720435 4833 generic.go:334] "Generic (PLEG): container finished" podID="ff0ff72d-af76-4711-9981-0790191376c3" 
containerID="6d4d99c9fb226cd77f114839770c92d17dc9960f4364a0bba089cd63f691a18c" exitCode=0 Feb 19 13:09:45 crc kubenswrapper[4833]: I0219 13:09:45.720545 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qwmc" event={"ID":"ff0ff72d-af76-4711-9981-0790191376c3","Type":"ContainerDied","Data":"6d4d99c9fb226cd77f114839770c92d17dc9960f4364a0bba089cd63f691a18c"} Feb 19 13:09:45 crc kubenswrapper[4833]: I0219 13:09:45.744233 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:09:45 crc kubenswrapper[4833]: I0219 13:09:45.744315 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:09:45 crc kubenswrapper[4833]: I0219 13:09:45.799670 4833 scope.go:117] "RemoveContainer" containerID="dba3d1413758072442d9ddfac05eb89afbc310bb7af8b791a2712c2c48b11986" Feb 19 13:09:46 crc kubenswrapper[4833]: I0219 13:09:46.736595 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qwmc" event={"ID":"ff0ff72d-af76-4711-9981-0790191376c3","Type":"ContainerStarted","Data":"882f7a3936963fbf5acfe08354397ff8005060279ea3e1cd628540b21edbb7ec"} Feb 19 13:09:46 crc kubenswrapper[4833]: I0219 13:09:46.759287 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5qwmc" podStartSLOduration=1.9442973970000001 podStartE2EDuration="5.759268019s" podCreationTimestamp="2026-02-19 13:09:41 +0000 UTC" firstStartedPulling="2026-02-19 13:09:42.68359232 +0000 UTC m=+1393.079111088" lastFinishedPulling="2026-02-19 13:09:46.498562922 +0000 UTC m=+1396.894081710" observedRunningTime="2026-02-19 13:09:46.750061746 +0000 UTC m=+1397.145580554" watchObservedRunningTime="2026-02-19 13:09:46.759268019 +0000 UTC m=+1397.154786787" Feb 19 13:09:51 crc kubenswrapper[4833]: I0219 13:09:51.671646 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5qwmc" Feb 19 13:09:51 crc kubenswrapper[4833]: I0219 13:09:51.672172 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5qwmc" Feb 19 13:09:52 crc kubenswrapper[4833]: I0219 13:09:52.736975 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5qwmc" podUID="ff0ff72d-af76-4711-9981-0790191376c3" containerName="registry-server" probeResult="failure" output=< Feb 19 13:09:52 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s Feb 19 13:09:52 crc kubenswrapper[4833]: > Feb 19 13:09:52 crc kubenswrapper[4833]: I0219 13:09:52.805259 4833 generic.go:334] "Generic (PLEG): container finished" podID="0457ceaa-c998-49db-bfa7-f837bf684537" containerID="cd24eb6f53d751c04976e1044649a870c52706a4fd6c6943228c118151340572" exitCode=0 Feb 19 13:09:52 crc kubenswrapper[4833]: I0219 13:09:52.805311 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt" 
event={"ID":"0457ceaa-c998-49db-bfa7-f837bf684537","Type":"ContainerDied","Data":"cd24eb6f53d751c04976e1044649a870c52706a4fd6c6943228c118151340572"} Feb 19 13:09:54 crc kubenswrapper[4833]: I0219 13:09:54.344619 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt" Feb 19 13:09:54 crc kubenswrapper[4833]: I0219 13:09:54.483214 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0457ceaa-c998-49db-bfa7-f837bf684537-repo-setup-combined-ca-bundle\") pod \"0457ceaa-c998-49db-bfa7-f837bf684537\" (UID: \"0457ceaa-c998-49db-bfa7-f837bf684537\") " Feb 19 13:09:54 crc kubenswrapper[4833]: I0219 13:09:54.483361 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0457ceaa-c998-49db-bfa7-f837bf684537-inventory\") pod \"0457ceaa-c998-49db-bfa7-f837bf684537\" (UID: \"0457ceaa-c998-49db-bfa7-f837bf684537\") " Feb 19 13:09:54 crc kubenswrapper[4833]: I0219 13:09:54.483615 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snlbz\" (UniqueName: \"kubernetes.io/projected/0457ceaa-c998-49db-bfa7-f837bf684537-kube-api-access-snlbz\") pod \"0457ceaa-c998-49db-bfa7-f837bf684537\" (UID: \"0457ceaa-c998-49db-bfa7-f837bf684537\") " Feb 19 13:09:54 crc kubenswrapper[4833]: I0219 13:09:54.483658 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0457ceaa-c998-49db-bfa7-f837bf684537-ssh-key-openstack-edpm-ipam\") pod \"0457ceaa-c998-49db-bfa7-f837bf684537\" (UID: \"0457ceaa-c998-49db-bfa7-f837bf684537\") " Feb 19 13:09:54 crc kubenswrapper[4833]: I0219 13:09:54.500801 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0457ceaa-c998-49db-bfa7-f837bf684537-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "0457ceaa-c998-49db-bfa7-f837bf684537" (UID: "0457ceaa-c998-49db-bfa7-f837bf684537"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:09:54 crc kubenswrapper[4833]: I0219 13:09:54.512650 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0457ceaa-c998-49db-bfa7-f837bf684537-kube-api-access-snlbz" (OuterVolumeSpecName: "kube-api-access-snlbz") pod "0457ceaa-c998-49db-bfa7-f837bf684537" (UID: "0457ceaa-c998-49db-bfa7-f837bf684537"). InnerVolumeSpecName "kube-api-access-snlbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:09:54 crc kubenswrapper[4833]: I0219 13:09:54.544465 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0457ceaa-c998-49db-bfa7-f837bf684537-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0457ceaa-c998-49db-bfa7-f837bf684537" (UID: "0457ceaa-c998-49db-bfa7-f837bf684537"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:09:54 crc kubenswrapper[4833]: I0219 13:09:54.550804 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0457ceaa-c998-49db-bfa7-f837bf684537-inventory" (OuterVolumeSpecName: "inventory") pod "0457ceaa-c998-49db-bfa7-f837bf684537" (UID: "0457ceaa-c998-49db-bfa7-f837bf684537"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:09:54 crc kubenswrapper[4833]: I0219 13:09:54.585424 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snlbz\" (UniqueName: \"kubernetes.io/projected/0457ceaa-c998-49db-bfa7-f837bf684537-kube-api-access-snlbz\") on node \"crc\" DevicePath \"\"" Feb 19 13:09:54 crc kubenswrapper[4833]: I0219 13:09:54.585462 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0457ceaa-c998-49db-bfa7-f837bf684537-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 13:09:54 crc kubenswrapper[4833]: I0219 13:09:54.585479 4833 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0457ceaa-c998-49db-bfa7-f837bf684537-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:09:54 crc kubenswrapper[4833]: I0219 13:09:54.585491 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0457ceaa-c998-49db-bfa7-f837bf684537-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 13:09:54 crc kubenswrapper[4833]: I0219 13:09:54.825157 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt" event={"ID":"0457ceaa-c998-49db-bfa7-f837bf684537","Type":"ContainerDied","Data":"e24c5cb0c3e72332258abe400a97d7e4f4de13ac7658d9c87100adc1648024d8"} Feb 19 13:09:54 crc kubenswrapper[4833]: I0219 13:09:54.825208 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e24c5cb0c3e72332258abe400a97d7e4f4de13ac7658d9c87100adc1648024d8" Feb 19 13:09:54 crc kubenswrapper[4833]: I0219 13:09:54.825222 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt" Feb 19 13:09:54 crc kubenswrapper[4833]: I0219 13:09:54.914231 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xnzhk"] Feb 19 13:09:54 crc kubenswrapper[4833]: E0219 13:09:54.915231 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0457ceaa-c998-49db-bfa7-f837bf684537" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 19 13:09:54 crc kubenswrapper[4833]: I0219 13:09:54.915257 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="0457ceaa-c998-49db-bfa7-f837bf684537" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 19 13:09:54 crc kubenswrapper[4833]: I0219 13:09:54.915512 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="0457ceaa-c998-49db-bfa7-f837bf684537" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 19 13:09:54 crc kubenswrapper[4833]: I0219 13:09:54.916284 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xnzhk" Feb 19 13:09:54 crc kubenswrapper[4833]: I0219 13:09:54.918321 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 13:09:54 crc kubenswrapper[4833]: I0219 13:09:54.918848 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 13:09:54 crc kubenswrapper[4833]: I0219 13:09:54.919136 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xcxf4" Feb 19 13:09:54 crc kubenswrapper[4833]: I0219 13:09:54.923012 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 13:09:54 crc kubenswrapper[4833]: I0219 13:09:54.927292 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xnzhk"] Feb 19 13:09:55 crc kubenswrapper[4833]: I0219 13:09:55.095343 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgbkd\" (UniqueName: \"kubernetes.io/projected/810a4b4a-798a-4dbc-9f86-81377c37d104-kube-api-access-mgbkd\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xnzhk\" (UID: \"810a4b4a-798a-4dbc-9f86-81377c37d104\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xnzhk" Feb 19 13:09:55 crc kubenswrapper[4833]: I0219 13:09:55.095578 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/810a4b4a-798a-4dbc-9f86-81377c37d104-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xnzhk\" (UID: \"810a4b4a-798a-4dbc-9f86-81377c37d104\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xnzhk" Feb 19 13:09:55 crc kubenswrapper[4833]: I0219 13:09:55.095658 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/810a4b4a-798a-4dbc-9f86-81377c37d104-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xnzhk\" (UID: \"810a4b4a-798a-4dbc-9f86-81377c37d104\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xnzhk" Feb 19 13:09:55 crc kubenswrapper[4833]: I0219 13:09:55.197749 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgbkd\" (UniqueName: \"kubernetes.io/projected/810a4b4a-798a-4dbc-9f86-81377c37d104-kube-api-access-mgbkd\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xnzhk\" (UID: \"810a4b4a-798a-4dbc-9f86-81377c37d104\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xnzhk" Feb 19 13:09:55 crc kubenswrapper[4833]: I0219 13:09:55.197862 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/810a4b4a-798a-4dbc-9f86-81377c37d104-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xnzhk\" (UID: \"810a4b4a-798a-4dbc-9f86-81377c37d104\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xnzhk" Feb 19 13:09:55 crc kubenswrapper[4833]: I0219 13:09:55.197919 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/810a4b4a-798a-4dbc-9f86-81377c37d104-inventory\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-xnzhk\" (UID: \"810a4b4a-798a-4dbc-9f86-81377c37d104\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xnzhk" Feb 19 13:09:55 crc kubenswrapper[4833]: I0219 13:09:55.203920 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/810a4b4a-798a-4dbc-9f86-81377c37d104-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xnzhk\" (UID: \"810a4b4a-798a-4dbc-9f86-81377c37d104\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xnzhk" Feb 19 13:09:55 crc kubenswrapper[4833]: I0219 13:09:55.203997 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/810a4b4a-798a-4dbc-9f86-81377c37d104-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xnzhk\" (UID: \"810a4b4a-798a-4dbc-9f86-81377c37d104\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xnzhk" Feb 19 13:09:55 crc kubenswrapper[4833]: I0219 13:09:55.220413 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgbkd\" (UniqueName: \"kubernetes.io/projected/810a4b4a-798a-4dbc-9f86-81377c37d104-kube-api-access-mgbkd\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xnzhk\" (UID: \"810a4b4a-798a-4dbc-9f86-81377c37d104\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xnzhk" Feb 19 13:09:55 crc kubenswrapper[4833]: I0219 13:09:55.237646 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xnzhk" Feb 19 13:09:55 crc kubenswrapper[4833]: I0219 13:09:55.802395 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xnzhk"] Feb 19 13:09:55 crc kubenswrapper[4833]: W0219 13:09:55.809436 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod810a4b4a_798a_4dbc_9f86_81377c37d104.slice/crio-b88322a36917080a928883fd49240fdabc2fe48d4c1f42229095e39993f36fdb WatchSource:0}: Error finding container b88322a36917080a928883fd49240fdabc2fe48d4c1f42229095e39993f36fdb: Status 404 returned error can't find the container with id b88322a36917080a928883fd49240fdabc2fe48d4c1f42229095e39993f36fdb Feb 19 13:09:55 crc kubenswrapper[4833]: I0219 13:09:55.837990 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xnzhk" event={"ID":"810a4b4a-798a-4dbc-9f86-81377c37d104","Type":"ContainerStarted","Data":"b88322a36917080a928883fd49240fdabc2fe48d4c1f42229095e39993f36fdb"} Feb 19 13:09:56 crc kubenswrapper[4833]: I0219 13:09:56.852279 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xnzhk" event={"ID":"810a4b4a-798a-4dbc-9f86-81377c37d104","Type":"ContainerStarted","Data":"1844e96718db71ac4c0308641994384fe25292081c5fdce3319c99de87b9ca19"} Feb 19 13:09:56 crc kubenswrapper[4833]: I0219 13:09:56.872414 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xnzhk" podStartSLOduration=2.165330827 podStartE2EDuration="2.872390609s" podCreationTimestamp="2026-02-19 13:09:54 +0000 UTC" firstStartedPulling="2026-02-19 13:09:55.811121284 +0000 UTC m=+1406.206640062" lastFinishedPulling="2026-02-19 13:09:56.518181076 +0000 UTC m=+1406.913699844" 
observedRunningTime="2026-02-19 13:09:56.871364662 +0000 UTC m=+1407.266883440" watchObservedRunningTime="2026-02-19 13:09:56.872390609 +0000 UTC m=+1407.267909397" Feb 19 13:09:59 crc kubenswrapper[4833]: I0219 13:09:59.885390 4833 generic.go:334] "Generic (PLEG): container finished" podID="810a4b4a-798a-4dbc-9f86-81377c37d104" containerID="1844e96718db71ac4c0308641994384fe25292081c5fdce3319c99de87b9ca19" exitCode=0 Feb 19 13:09:59 crc kubenswrapper[4833]: I0219 13:09:59.885534 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xnzhk" event={"ID":"810a4b4a-798a-4dbc-9f86-81377c37d104","Type":"ContainerDied","Data":"1844e96718db71ac4c0308641994384fe25292081c5fdce3319c99de87b9ca19"} Feb 19 13:10:01 crc kubenswrapper[4833]: I0219 13:10:01.382716 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xnzhk" Feb 19 13:10:01 crc kubenswrapper[4833]: I0219 13:10:01.533789 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/810a4b4a-798a-4dbc-9f86-81377c37d104-ssh-key-openstack-edpm-ipam\") pod \"810a4b4a-798a-4dbc-9f86-81377c37d104\" (UID: \"810a4b4a-798a-4dbc-9f86-81377c37d104\") " Feb 19 13:10:01 crc kubenswrapper[4833]: I0219 13:10:01.533936 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/810a4b4a-798a-4dbc-9f86-81377c37d104-inventory\") pod \"810a4b4a-798a-4dbc-9f86-81377c37d104\" (UID: \"810a4b4a-798a-4dbc-9f86-81377c37d104\") " Feb 19 13:10:01 crc kubenswrapper[4833]: I0219 13:10:01.534049 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgbkd\" (UniqueName: \"kubernetes.io/projected/810a4b4a-798a-4dbc-9f86-81377c37d104-kube-api-access-mgbkd\") pod \"810a4b4a-798a-4dbc-9f86-81377c37d104\" (UID: \"810a4b4a-798a-4dbc-9f86-81377c37d104\") " Feb 19 13:10:01 crc kubenswrapper[4833]: I0219 13:10:01.543437 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/810a4b4a-798a-4dbc-9f86-81377c37d104-kube-api-access-mgbkd" (OuterVolumeSpecName: "kube-api-access-mgbkd") pod "810a4b4a-798a-4dbc-9f86-81377c37d104" (UID: "810a4b4a-798a-4dbc-9f86-81377c37d104"). InnerVolumeSpecName "kube-api-access-mgbkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:01 crc kubenswrapper[4833]: I0219 13:10:01.575631 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810a4b4a-798a-4dbc-9f86-81377c37d104-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "810a4b4a-798a-4dbc-9f86-81377c37d104" (UID: "810a4b4a-798a-4dbc-9f86-81377c37d104"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:01 crc kubenswrapper[4833]: I0219 13:10:01.576384 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/810a4b4a-798a-4dbc-9f86-81377c37d104-inventory" (OuterVolumeSpecName: "inventory") pod "810a4b4a-798a-4dbc-9f86-81377c37d104" (UID: "810a4b4a-798a-4dbc-9f86-81377c37d104"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:10:01 crc kubenswrapper[4833]: I0219 13:10:01.636646 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/810a4b4a-798a-4dbc-9f86-81377c37d104-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:01 crc kubenswrapper[4833]: I0219 13:10:01.636734 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/810a4b4a-798a-4dbc-9f86-81377c37d104-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:01 crc kubenswrapper[4833]: I0219 13:10:01.636753 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgbkd\" (UniqueName: \"kubernetes.io/projected/810a4b4a-798a-4dbc-9f86-81377c37d104-kube-api-access-mgbkd\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:01 crc kubenswrapper[4833]: I0219 13:10:01.729957 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5qwmc" Feb 19 13:10:01 crc kubenswrapper[4833]: I0219 13:10:01.790623 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5qwmc" Feb 19 13:10:01 crc kubenswrapper[4833]: I0219 13:10:01.911122 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xnzhk" event={"ID":"810a4b4a-798a-4dbc-9f86-81377c37d104","Type":"ContainerDied","Data":"b88322a36917080a928883fd49240fdabc2fe48d4c1f42229095e39993f36fdb"} Feb 19 13:10:01 crc kubenswrapper[4833]: I0219 13:10:01.911183 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xnzhk" Feb 19 13:10:01 crc kubenswrapper[4833]: I0219 13:10:01.911193 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b88322a36917080a928883fd49240fdabc2fe48d4c1f42229095e39993f36fdb" Feb 19 13:10:02 crc kubenswrapper[4833]: I0219 13:10:02.001216 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5qwmc"] Feb 19 13:10:02 crc kubenswrapper[4833]: I0219 13:10:02.036320 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp"] Feb 19 13:10:02 crc kubenswrapper[4833]: E0219 13:10:02.036829 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="810a4b4a-798a-4dbc-9f86-81377c37d104" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 19 13:10:02 crc kubenswrapper[4833]: I0219 13:10:02.036851 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="810a4b4a-798a-4dbc-9f86-81377c37d104" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 19 13:10:02 crc kubenswrapper[4833]: I0219 13:10:02.037124 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="810a4b4a-798a-4dbc-9f86-81377c37d104" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 19 13:10:02 crc kubenswrapper[4833]: I0219 13:10:02.038023 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp" Feb 19 13:10:02 crc kubenswrapper[4833]: I0219 13:10:02.039708 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 13:10:02 crc kubenswrapper[4833]: I0219 13:10:02.039707 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 13:10:02 crc kubenswrapper[4833]: I0219 13:10:02.040230 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 13:10:02 crc kubenswrapper[4833]: I0219 13:10:02.040561 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xcxf4" Feb 19 13:10:02 crc kubenswrapper[4833]: I0219 13:10:02.045132 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp"] Feb 19 13:10:02 crc kubenswrapper[4833]: I0219 13:10:02.149810 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71fc87a7-2568-481c-a841-6500a69ba8b9-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp\" (UID: \"71fc87a7-2568-481c-a841-6500a69ba8b9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp" Feb 19 13:10:02 crc kubenswrapper[4833]: I0219 13:10:02.150051 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71fc87a7-2568-481c-a841-6500a69ba8b9-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp\" (UID: \"71fc87a7-2568-481c-a841-6500a69ba8b9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp" Feb 19 13:10:02 crc kubenswrapper[4833]: I0219 13:10:02.150143 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71fc87a7-2568-481c-a841-6500a69ba8b9-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp\" (UID: \"71fc87a7-2568-481c-a841-6500a69ba8b9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp" Feb 19 13:10:02 crc kubenswrapper[4833]: I0219 13:10:02.150349 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55xlz\" (UniqueName: \"kubernetes.io/projected/71fc87a7-2568-481c-a841-6500a69ba8b9-kube-api-access-55xlz\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp\" (UID: \"71fc87a7-2568-481c-a841-6500a69ba8b9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp" Feb 19 13:10:02 crc kubenswrapper[4833]: I0219 13:10:02.252654 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55xlz\" (UniqueName: \"kubernetes.io/projected/71fc87a7-2568-481c-a841-6500a69ba8b9-kube-api-access-55xlz\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp\" (UID: \"71fc87a7-2568-481c-a841-6500a69ba8b9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp" Feb 19 13:10:02 crc kubenswrapper[4833]: I0219 13:10:02.252808 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/71fc87a7-2568-481c-a841-6500a69ba8b9-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp\" (UID: \"71fc87a7-2568-481c-a841-6500a69ba8b9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp" Feb 19 13:10:02 crc kubenswrapper[4833]: I0219 13:10:02.252836 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71fc87a7-2568-481c-a841-6500a69ba8b9-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp\" (UID: \"71fc87a7-2568-481c-a841-6500a69ba8b9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp" Feb 19 13:10:02 crc kubenswrapper[4833]: I0219 13:10:02.252882 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71fc87a7-2568-481c-a841-6500a69ba8b9-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp\" (UID: \"71fc87a7-2568-481c-a841-6500a69ba8b9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp" Feb 19 13:10:02 crc kubenswrapper[4833]: I0219 13:10:02.259487 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71fc87a7-2568-481c-a841-6500a69ba8b9-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp\" (UID: \"71fc87a7-2568-481c-a841-6500a69ba8b9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp" Feb 19 13:10:02 crc kubenswrapper[4833]: I0219 13:10:02.261116 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71fc87a7-2568-481c-a841-6500a69ba8b9-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp\" (UID: \"71fc87a7-2568-481c-a841-6500a69ba8b9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp" Feb 19 13:10:02 crc kubenswrapper[4833]: I0219 13:10:02.265695 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71fc87a7-2568-481c-a841-6500a69ba8b9-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp\" (UID: \"71fc87a7-2568-481c-a841-6500a69ba8b9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp" Feb 19 13:10:02 crc kubenswrapper[4833]: I0219 13:10:02.283551 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55xlz\" (UniqueName: \"kubernetes.io/projected/71fc87a7-2568-481c-a841-6500a69ba8b9-kube-api-access-55xlz\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp\" (UID: \"71fc87a7-2568-481c-a841-6500a69ba8b9\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp" Feb 19 13:10:02 crc kubenswrapper[4833]: I0219 13:10:02.363320 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp" Feb 19 13:10:02 crc kubenswrapper[4833]: I0219 13:10:02.886464 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp"] Feb 19 13:10:02 crc kubenswrapper[4833]: I0219 13:10:02.920412 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp" event={"ID":"71fc87a7-2568-481c-a841-6500a69ba8b9","Type":"ContainerStarted","Data":"8a9d1aec3edcc4ef2e0d840b48cc15d81edf16dc86156b770093fe74b2bafb4c"} Feb 19 13:10:02 crc kubenswrapper[4833]: I0219 13:10:02.920650 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5qwmc" podUID="ff0ff72d-af76-4711-9981-0790191376c3" containerName="registry-server" containerID="cri-o://882f7a3936963fbf5acfe08354397ff8005060279ea3e1cd628540b21edbb7ec" gracePeriod=2 Feb 19 13:10:03 crc kubenswrapper[4833]: I0219 13:10:03.502662 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5qwmc" Feb 19 13:10:03 crc kubenswrapper[4833]: I0219 13:10:03.588623 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff0ff72d-af76-4711-9981-0790191376c3-catalog-content\") pod \"ff0ff72d-af76-4711-9981-0790191376c3\" (UID: \"ff0ff72d-af76-4711-9981-0790191376c3\") " Feb 19 13:10:03 crc kubenswrapper[4833]: I0219 13:10:03.588740 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff0ff72d-af76-4711-9981-0790191376c3-utilities\") pod \"ff0ff72d-af76-4711-9981-0790191376c3\" (UID: \"ff0ff72d-af76-4711-9981-0790191376c3\") " Feb 19 13:10:03 crc kubenswrapper[4833]: I0219 13:10:03.588841 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hl4n\" (UniqueName: \"kubernetes.io/projected/ff0ff72d-af76-4711-9981-0790191376c3-kube-api-access-5hl4n\") pod \"ff0ff72d-af76-4711-9981-0790191376c3\" (UID: \"ff0ff72d-af76-4711-9981-0790191376c3\") " Feb 19 13:10:03 crc kubenswrapper[4833]: I0219 13:10:03.590011 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff0ff72d-af76-4711-9981-0790191376c3-utilities" (OuterVolumeSpecName: "utilities") pod "ff0ff72d-af76-4711-9981-0790191376c3" (UID: "ff0ff72d-af76-4711-9981-0790191376c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:10:03 crc kubenswrapper[4833]: I0219 13:10:03.595963 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff0ff72d-af76-4711-9981-0790191376c3-kube-api-access-5hl4n" (OuterVolumeSpecName: "kube-api-access-5hl4n") pod "ff0ff72d-af76-4711-9981-0790191376c3" (UID: "ff0ff72d-af76-4711-9981-0790191376c3"). InnerVolumeSpecName "kube-api-access-5hl4n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:03 crc kubenswrapper[4833]: I0219 13:10:03.691230 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff0ff72d-af76-4711-9981-0790191376c3-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:03 crc kubenswrapper[4833]: I0219 13:10:03.691798 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hl4n\" (UniqueName: \"kubernetes.io/projected/ff0ff72d-af76-4711-9981-0790191376c3-kube-api-access-5hl4n\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:03 crc kubenswrapper[4833]: I0219 13:10:03.727680 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff0ff72d-af76-4711-9981-0790191376c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff0ff72d-af76-4711-9981-0790191376c3" (UID: "ff0ff72d-af76-4711-9981-0790191376c3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:10:03 crc kubenswrapper[4833]: I0219 13:10:03.793120 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff0ff72d-af76-4711-9981-0790191376c3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:03 crc kubenswrapper[4833]: I0219 13:10:03.937982 4833 generic.go:334] "Generic (PLEG): container finished" podID="ff0ff72d-af76-4711-9981-0790191376c3" containerID="882f7a3936963fbf5acfe08354397ff8005060279ea3e1cd628540b21edbb7ec" exitCode=0 Feb 19 13:10:03 crc kubenswrapper[4833]: I0219 13:10:03.938049 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qwmc" event={"ID":"ff0ff72d-af76-4711-9981-0790191376c3","Type":"ContainerDied","Data":"882f7a3936963fbf5acfe08354397ff8005060279ea3e1cd628540b21edbb7ec"} Feb 19 13:10:03 crc kubenswrapper[4833]: I0219 13:10:03.938068 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5qwmc" Feb 19 13:10:03 crc kubenswrapper[4833]: I0219 13:10:03.938088 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5qwmc" event={"ID":"ff0ff72d-af76-4711-9981-0790191376c3","Type":"ContainerDied","Data":"4aa04a8e7ebd6c4c9532c385e4f38da3ed111c77166c45c46c6e8e4e115bf17a"} Feb 19 13:10:03 crc kubenswrapper[4833]: I0219 13:10:03.938106 4833 scope.go:117] "RemoveContainer" containerID="882f7a3936963fbf5acfe08354397ff8005060279ea3e1cd628540b21edbb7ec" Feb 19 13:10:03 crc kubenswrapper[4833]: I0219 13:10:03.951754 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp" event={"ID":"71fc87a7-2568-481c-a841-6500a69ba8b9","Type":"ContainerStarted","Data":"6539bba5763c60c9db39f5b3a1fbd7702ee2d910a033f0096496453cd7fa7f52"} Feb 19 13:10:03 crc kubenswrapper[4833]: I0219 13:10:03.976794 4833 scope.go:117] "RemoveContainer" containerID="6d4d99c9fb226cd77f114839770c92d17dc9960f4364a0bba089cd63f691a18c" Feb 19 13:10:03 crc kubenswrapper[4833]: I0219 13:10:03.981959 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp" podStartSLOduration=2.528458296 podStartE2EDuration="2.981931188s" podCreationTimestamp="2026-02-19 13:10:01 +0000 UTC" firstStartedPulling="2026-02-19 13:10:02.88795951 +0000 UTC m=+1413.283478278" lastFinishedPulling="2026-02-19 13:10:03.341432402 +0000 UTC m=+1413.736951170" observedRunningTime="2026-02-19 13:10:03.969975052 +0000 UTC m=+1414.365493820" watchObservedRunningTime="2026-02-19 13:10:03.981931188 +0000 UTC m=+1414.377449966" Feb 19 13:10:04 crc kubenswrapper[4833]: I0219 13:10:04.005127 4833 scope.go:117] "RemoveContainer" containerID="9e04c6a9d9cfc569a4f148e4960aa861ab404ef26aa1d7ff1435517222c8dcf9" Feb 19 13:10:04 crc kubenswrapper[4833]: I0219 13:10:04.010436 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5qwmc"] Feb 19 13:10:04 crc kubenswrapper[4833]: I0219 13:10:04.022565 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5qwmc"] Feb 19 13:10:04 crc kubenswrapper[4833]: I0219 13:10:04.044423 4833 scope.go:117] "RemoveContainer" containerID="882f7a3936963fbf5acfe08354397ff8005060279ea3e1cd628540b21edbb7ec" Feb 19 13:10:04 crc kubenswrapper[4833]: E0219 13:10:04.044864 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"882f7a3936963fbf5acfe08354397ff8005060279ea3e1cd628540b21edbb7ec\": container with ID starting with 882f7a3936963fbf5acfe08354397ff8005060279ea3e1cd628540b21edbb7ec not found: ID does not exist" containerID="882f7a3936963fbf5acfe08354397ff8005060279ea3e1cd628540b21edbb7ec" Feb 19 13:10:04 crc kubenswrapper[4833]: I0219 13:10:04.044922 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"882f7a3936963fbf5acfe08354397ff8005060279ea3e1cd628540b21edbb7ec"} err="failed to get container status \"882f7a3936963fbf5acfe08354397ff8005060279ea3e1cd628540b21edbb7ec\": rpc error: code = NotFound desc = could not find container \"882f7a3936963fbf5acfe08354397ff8005060279ea3e1cd628540b21edbb7ec\": container with ID starting with 882f7a3936963fbf5acfe08354397ff8005060279ea3e1cd628540b21edbb7ec not found: ID does not exist" Feb 19 13:10:04 crc kubenswrapper[4833]: I0219 
13:10:04.044954 4833 scope.go:117] "RemoveContainer" containerID="6d4d99c9fb226cd77f114839770c92d17dc9960f4364a0bba089cd63f691a18c" Feb 19 13:10:04 crc kubenswrapper[4833]: E0219 13:10:04.045261 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d4d99c9fb226cd77f114839770c92d17dc9960f4364a0bba089cd63f691a18c\": container with ID starting with 6d4d99c9fb226cd77f114839770c92d17dc9960f4364a0bba089cd63f691a18c not found: ID does not exist" containerID="6d4d99c9fb226cd77f114839770c92d17dc9960f4364a0bba089cd63f691a18c" Feb 19 13:10:04 crc kubenswrapper[4833]: I0219 13:10:04.045291 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d4d99c9fb226cd77f114839770c92d17dc9960f4364a0bba089cd63f691a18c"} err="failed to get container status \"6d4d99c9fb226cd77f114839770c92d17dc9960f4364a0bba089cd63f691a18c\": rpc error: code = NotFound desc = could not find container \"6d4d99c9fb226cd77f114839770c92d17dc9960f4364a0bba089cd63f691a18c\": container with ID starting with 6d4d99c9fb226cd77f114839770c92d17dc9960f4364a0bba089cd63f691a18c not found: ID does not exist" Feb 19 13:10:04 crc kubenswrapper[4833]: I0219 13:10:04.045314 4833 scope.go:117] "RemoveContainer" containerID="9e04c6a9d9cfc569a4f148e4960aa861ab404ef26aa1d7ff1435517222c8dcf9" Feb 19 13:10:04 crc kubenswrapper[4833]: E0219 13:10:04.045565 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e04c6a9d9cfc569a4f148e4960aa861ab404ef26aa1d7ff1435517222c8dcf9\": container with ID starting with 9e04c6a9d9cfc569a4f148e4960aa861ab404ef26aa1d7ff1435517222c8dcf9 not found: ID does not exist" containerID="9e04c6a9d9cfc569a4f148e4960aa861ab404ef26aa1d7ff1435517222c8dcf9" Feb 19 13:10:04 crc kubenswrapper[4833]: I0219 13:10:04.045597 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e04c6a9d9cfc569a4f148e4960aa861ab404ef26aa1d7ff1435517222c8dcf9"} err="failed to get container status \"9e04c6a9d9cfc569a4f148e4960aa861ab404ef26aa1d7ff1435517222c8dcf9\": rpc error: code = NotFound desc = could not find container \"9e04c6a9d9cfc569a4f148e4960aa861ab404ef26aa1d7ff1435517222c8dcf9\": container with ID starting with 9e04c6a9d9cfc569a4f148e4960aa861ab404ef26aa1d7ff1435517222c8dcf9 not found: ID does not exist" Feb 19 13:10:04 crc kubenswrapper[4833]: I0219 13:10:04.329397 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff0ff72d-af76-4711-9981-0790191376c3" path="/var/lib/kubelet/pods/ff0ff72d-af76-4711-9981-0790191376c3/volumes" Feb 19 13:10:15 crc kubenswrapper[4833]: I0219 13:10:15.745451 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:10:15 crc kubenswrapper[4833]: I0219 13:10:15.745973 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:10:15 crc kubenswrapper[4833]: I0219 13:10:15.746019 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" Feb 19 13:10:15 crc kubenswrapper[4833]: I0219 13:10:15.746444 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"44cd4d92890c7506a1edce4407a60145e4dd4d2e3ac145ff2d3b775c7a0f6b00"} pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 13:10:15 crc kubenswrapper[4833]: I0219 13:10:15.746547 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" containerID="cri-o://44cd4d92890c7506a1edce4407a60145e4dd4d2e3ac145ff2d3b775c7a0f6b00" gracePeriod=600 Feb 19 13:10:16 crc kubenswrapper[4833]: I0219 13:10:16.075600 4833 generic.go:334] "Generic (PLEG): container finished" podID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerID="44cd4d92890c7506a1edce4407a60145e4dd4d2e3ac145ff2d3b775c7a0f6b00" exitCode=0 Feb 19 13:10:16 crc kubenswrapper[4833]: I0219 13:10:16.075700 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" event={"ID":"a396d626-cea2-42cf-84c5-943b0b85a92b","Type":"ContainerDied","Data":"44cd4d92890c7506a1edce4407a60145e4dd4d2e3ac145ff2d3b775c7a0f6b00"} Feb 19 13:10:16 crc kubenswrapper[4833]: I0219 13:10:16.075974 4833 scope.go:117] "RemoveContainer" containerID="79901fa015c98a89f8eb5d748d58a779eb4aed74d086040cca560575f94233a9" Feb 19 13:10:17 crc kubenswrapper[4833]: I0219 13:10:17.088349 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" event={"ID":"a396d626-cea2-42cf-84c5-943b0b85a92b","Type":"ContainerStarted","Data":"68af062ad026f894823c5275509a3a85a3d7b9b44d6ca2d938db284880905483"} Feb 19 13:10:18 crc kubenswrapper[4833]: I0219 13:10:18.355738 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6867r"] Feb 19 13:10:18 crc kubenswrapper[4833]: E0219 13:10:18.356676 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff0ff72d-af76-4711-9981-0790191376c3" containerName="extract-content" Feb 19 13:10:18 crc kubenswrapper[4833]: I0219 13:10:18.356705 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff0ff72d-af76-4711-9981-0790191376c3" containerName="extract-content" Feb 19 13:10:18 crc kubenswrapper[4833]: E0219 13:10:18.356741 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff0ff72d-af76-4711-9981-0790191376c3" containerName="registry-server" Feb 19 13:10:18 crc kubenswrapper[4833]: I0219 13:10:18.356750 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff0ff72d-af76-4711-9981-0790191376c3" containerName="registry-server" Feb 19 13:10:18 crc kubenswrapper[4833]: E0219 13:10:18.356781 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff0ff72d-af76-4711-9981-0790191376c3" containerName="extract-utilities" Feb 19 13:10:18 crc kubenswrapper[4833]: I0219 13:10:18.356791 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff0ff72d-af76-4711-9981-0790191376c3" containerName="extract-utilities" Feb 19 13:10:18 crc kubenswrapper[4833]: I0219 13:10:18.357130 4833 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ff0ff72d-af76-4711-9981-0790191376c3" containerName="registry-server" Feb 19 13:10:18 crc kubenswrapper[4833]: I0219 13:10:18.358844 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6867r" Feb 19 13:10:18 crc kubenswrapper[4833]: I0219 13:10:18.380436 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6867r"] Feb 19 13:10:18 crc kubenswrapper[4833]: I0219 13:10:18.480698 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm79n\" (UniqueName: \"kubernetes.io/projected/45f69f79-f51a-4cf2-a390-3a5da70945ca-kube-api-access-vm79n\") pod \"community-operators-6867r\" (UID: \"45f69f79-f51a-4cf2-a390-3a5da70945ca\") " pod="openshift-marketplace/community-operators-6867r" Feb 19 13:10:18 crc kubenswrapper[4833]: I0219 13:10:18.481027 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45f69f79-f51a-4cf2-a390-3a5da70945ca-utilities\") pod \"community-operators-6867r\" (UID: \"45f69f79-f51a-4cf2-a390-3a5da70945ca\") " pod="openshift-marketplace/community-operators-6867r" Feb 19 13:10:18 crc kubenswrapper[4833]: I0219 13:10:18.481162 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45f69f79-f51a-4cf2-a390-3a5da70945ca-catalog-content\") pod \"community-operators-6867r\" (UID: \"45f69f79-f51a-4cf2-a390-3a5da70945ca\") " pod="openshift-marketplace/community-operators-6867r" Feb 19 13:10:18 crc kubenswrapper[4833]: I0219 13:10:18.583247 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45f69f79-f51a-4cf2-a390-3a5da70945ca-utilities\") pod \"community-operators-6867r\" (UID: \"45f69f79-f51a-4cf2-a390-3a5da70945ca\") " pod="openshift-marketplace/community-operators-6867r" Feb 19 13:10:18 crc kubenswrapper[4833]: I0219 13:10:18.583308 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45f69f79-f51a-4cf2-a390-3a5da70945ca-catalog-content\") pod \"community-operators-6867r\" (UID: \"45f69f79-f51a-4cf2-a390-3a5da70945ca\") " pod="openshift-marketplace/community-operators-6867r" Feb 19 13:10:18 crc kubenswrapper[4833]: I0219 13:10:18.583441 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm79n\" (UniqueName: \"kubernetes.io/projected/45f69f79-f51a-4cf2-a390-3a5da70945ca-kube-api-access-vm79n\") pod \"community-operators-6867r\" (UID: \"45f69f79-f51a-4cf2-a390-3a5da70945ca\") " pod="openshift-marketplace/community-operators-6867r" Feb 19 13:10:18 crc kubenswrapper[4833]: I0219 13:10:18.584080 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45f69f79-f51a-4cf2-a390-3a5da70945ca-utilities\") pod \"community-operators-6867r\" (UID: \"45f69f79-f51a-4cf2-a390-3a5da70945ca\") " pod="openshift-marketplace/community-operators-6867r" Feb 19 13:10:18 crc kubenswrapper[4833]: I0219 13:10:18.584282 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45f69f79-f51a-4cf2-a390-3a5da70945ca-catalog-content\") pod \"community-operators-6867r\" (UID: 
\"45f69f79-f51a-4cf2-a390-3a5da70945ca\") " pod="openshift-marketplace/community-operators-6867r" Feb 19 13:10:18 crc kubenswrapper[4833]: I0219 13:10:18.606570 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm79n\" (UniqueName: \"kubernetes.io/projected/45f69f79-f51a-4cf2-a390-3a5da70945ca-kube-api-access-vm79n\") pod \"community-operators-6867r\" (UID: \"45f69f79-f51a-4cf2-a390-3a5da70945ca\") " pod="openshift-marketplace/community-operators-6867r" Feb 19 13:10:18 crc kubenswrapper[4833]: I0219 13:10:18.688157 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6867r" Feb 19 13:10:19 crc kubenswrapper[4833]: I0219 13:10:19.289973 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6867r"] Feb 19 13:10:19 crc kubenswrapper[4833]: W0219 13:10:19.295144 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45f69f79_f51a_4cf2_a390_3a5da70945ca.slice/crio-865e2f25efc7ec462fe8a3799419be8bebb16d801411756966c82de897418c7a WatchSource:0}: Error finding container 865e2f25efc7ec462fe8a3799419be8bebb16d801411756966c82de897418c7a: Status 404 returned error can't find the container with id 865e2f25efc7ec462fe8a3799419be8bebb16d801411756966c82de897418c7a Feb 19 13:10:20 crc kubenswrapper[4833]: I0219 13:10:20.117596 4833 generic.go:334] "Generic (PLEG): container finished" podID="45f69f79-f51a-4cf2-a390-3a5da70945ca" containerID="21fc1800632e449d4ddbe6e6a02a1de92fb90cbfd6b3ff9a4d78b8274f9496ca" exitCode=0 Feb 19 13:10:20 crc kubenswrapper[4833]: I0219 13:10:20.117758 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6867r" event={"ID":"45f69f79-f51a-4cf2-a390-3a5da70945ca","Type":"ContainerDied","Data":"21fc1800632e449d4ddbe6e6a02a1de92fb90cbfd6b3ff9a4d78b8274f9496ca"} Feb 19 13:10:20 crc kubenswrapper[4833]: I0219 13:10:20.117923 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6867r" event={"ID":"45f69f79-f51a-4cf2-a390-3a5da70945ca","Type":"ContainerStarted","Data":"865e2f25efc7ec462fe8a3799419be8bebb16d801411756966c82de897418c7a"} Feb 19 13:10:21 crc kubenswrapper[4833]: I0219 13:10:21.131134 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6867r" event={"ID":"45f69f79-f51a-4cf2-a390-3a5da70945ca","Type":"ContainerStarted","Data":"26d4c5f63f1eb2e1d069c7d09321a8ade9c5484acb0b65467d55fde6cc980ccd"} Feb 19 13:10:22 crc kubenswrapper[4833]: I0219 13:10:22.143068 4833 generic.go:334] "Generic (PLEG): container finished" podID="45f69f79-f51a-4cf2-a390-3a5da70945ca" containerID="26d4c5f63f1eb2e1d069c7d09321a8ade9c5484acb0b65467d55fde6cc980ccd" exitCode=0 Feb 19 13:10:22 crc kubenswrapper[4833]: I0219 13:10:22.143153 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6867r" event={"ID":"45f69f79-f51a-4cf2-a390-3a5da70945ca","Type":"ContainerDied","Data":"26d4c5f63f1eb2e1d069c7d09321a8ade9c5484acb0b65467d55fde6cc980ccd"} Feb 19 13:10:23 crc kubenswrapper[4833]: I0219 13:10:23.163139 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6867r" event={"ID":"45f69f79-f51a-4cf2-a390-3a5da70945ca","Type":"ContainerStarted","Data":"4332f818953310f1ddbea04ae08e3b80e3adf6c3987fccd0709f20ee2995c017"} Feb 19 13:10:23 crc 
kubenswrapper[4833]: I0219 13:10:23.200937 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6867r" podStartSLOduration=2.6895975549999998 podStartE2EDuration="5.200906966s" podCreationTimestamp="2026-02-19 13:10:18 +0000 UTC" firstStartedPulling="2026-02-19 13:10:20.120553766 +0000 UTC m=+1430.516072534" lastFinishedPulling="2026-02-19 13:10:22.631863187 +0000 UTC m=+1433.027381945" observedRunningTime="2026-02-19 13:10:23.191864669 +0000 UTC m=+1433.587383467" watchObservedRunningTime="2026-02-19 13:10:23.200906966 +0000 UTC m=+1433.596425744" Feb 19 13:10:28 crc kubenswrapper[4833]: I0219 13:10:28.689589 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6867r" Feb 19 13:10:28 crc kubenswrapper[4833]: I0219 13:10:28.690687 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6867r" Feb 19 13:10:28 crc kubenswrapper[4833]: I0219 13:10:28.735737 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6867r" Feb 19 13:10:29 crc kubenswrapper[4833]: I0219 13:10:29.267214 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6867r" Feb 19 13:10:30 crc kubenswrapper[4833]: I0219 13:10:30.761995 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6867r"] Feb 19 13:10:32 crc kubenswrapper[4833]: I0219 13:10:32.244151 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6867r" podUID="45f69f79-f51a-4cf2-a390-3a5da70945ca" containerName="registry-server" containerID="cri-o://4332f818953310f1ddbea04ae08e3b80e3adf6c3987fccd0709f20ee2995c017" gracePeriod=2 Feb 19 13:10:33 crc kubenswrapper[4833]: I0219 13:10:33.255511 4833 generic.go:334] "Generic (PLEG): container finished" podID="45f69f79-f51a-4cf2-a390-3a5da70945ca" containerID="4332f818953310f1ddbea04ae08e3b80e3adf6c3987fccd0709f20ee2995c017" exitCode=0 Feb 19 13:10:33 crc kubenswrapper[4833]: I0219 13:10:33.255630 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6867r" event={"ID":"45f69f79-f51a-4cf2-a390-3a5da70945ca","Type":"ContainerDied","Data":"4332f818953310f1ddbea04ae08e3b80e3adf6c3987fccd0709f20ee2995c017"} Feb 19 13:10:33 crc kubenswrapper[4833]: I0219 13:10:33.337115 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6867r" Feb 19 13:10:33 crc kubenswrapper[4833]: I0219 13:10:33.489522 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm79n\" (UniqueName: \"kubernetes.io/projected/45f69f79-f51a-4cf2-a390-3a5da70945ca-kube-api-access-vm79n\") pod \"45f69f79-f51a-4cf2-a390-3a5da70945ca\" (UID: \"45f69f79-f51a-4cf2-a390-3a5da70945ca\") " Feb 19 13:10:33 crc kubenswrapper[4833]: I0219 13:10:33.489606 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45f69f79-f51a-4cf2-a390-3a5da70945ca-catalog-content\") pod \"45f69f79-f51a-4cf2-a390-3a5da70945ca\" (UID: \"45f69f79-f51a-4cf2-a390-3a5da70945ca\") " Feb 19 13:10:33 crc kubenswrapper[4833]: I0219 13:10:33.489888 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45f69f79-f51a-4cf2-a390-3a5da70945ca-utilities\") pod \"45f69f79-f51a-4cf2-a390-3a5da70945ca\" (UID: \"45f69f79-f51a-4cf2-a390-3a5da70945ca\") " Feb 19 13:10:33 crc kubenswrapper[4833]: I0219 13:10:33.490572 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45f69f79-f51a-4cf2-a390-3a5da70945ca-utilities" (OuterVolumeSpecName: "utilities") pod "45f69f79-f51a-4cf2-a390-3a5da70945ca" (UID: "45f69f79-f51a-4cf2-a390-3a5da70945ca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:10:33 crc kubenswrapper[4833]: I0219 13:10:33.496488 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45f69f79-f51a-4cf2-a390-3a5da70945ca-kube-api-access-vm79n" (OuterVolumeSpecName: "kube-api-access-vm79n") pod "45f69f79-f51a-4cf2-a390-3a5da70945ca" (UID: "45f69f79-f51a-4cf2-a390-3a5da70945ca"). InnerVolumeSpecName "kube-api-access-vm79n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:10:33 crc kubenswrapper[4833]: I0219 13:10:33.556509 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45f69f79-f51a-4cf2-a390-3a5da70945ca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45f69f79-f51a-4cf2-a390-3a5da70945ca" (UID: "45f69f79-f51a-4cf2-a390-3a5da70945ca"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:10:33 crc kubenswrapper[4833]: I0219 13:10:33.592033 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45f69f79-f51a-4cf2-a390-3a5da70945ca-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:33 crc kubenswrapper[4833]: I0219 13:10:33.592065 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm79n\" (UniqueName: \"kubernetes.io/projected/45f69f79-f51a-4cf2-a390-3a5da70945ca-kube-api-access-vm79n\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:33 crc kubenswrapper[4833]: I0219 13:10:33.592080 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45f69f79-f51a-4cf2-a390-3a5da70945ca-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:10:34 crc kubenswrapper[4833]: I0219 13:10:34.267215 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6867r" event={"ID":"45f69f79-f51a-4cf2-a390-3a5da70945ca","Type":"ContainerDied","Data":"865e2f25efc7ec462fe8a3799419be8bebb16d801411756966c82de897418c7a"} Feb 19 13:10:34 crc kubenswrapper[4833]: I0219 13:10:34.267579 4833 scope.go:117] "RemoveContainer" containerID="4332f818953310f1ddbea04ae08e3b80e3adf6c3987fccd0709f20ee2995c017" Feb 19 13:10:34 crc kubenswrapper[4833]: I0219 13:10:34.267299 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6867r" Feb 19 13:10:34 crc kubenswrapper[4833]: I0219 13:10:34.307328 4833 scope.go:117] "RemoveContainer" containerID="26d4c5f63f1eb2e1d069c7d09321a8ade9c5484acb0b65467d55fde6cc980ccd" Feb 19 13:10:34 crc kubenswrapper[4833]: I0219 13:10:34.307842 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6867r"] Feb 19 13:10:34 crc kubenswrapper[4833]: I0219 13:10:34.328087 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6867r"] Feb 19 13:10:34 crc kubenswrapper[4833]: I0219 13:10:34.331592 4833 scope.go:117] "RemoveContainer" containerID="21fc1800632e449d4ddbe6e6a02a1de92fb90cbfd6b3ff9a4d78b8274f9496ca" Feb 19 13:10:36 crc kubenswrapper[4833]: I0219 13:10:36.334762 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45f69f79-f51a-4cf2-a390-3a5da70945ca" path="/var/lib/kubelet/pods/45f69f79-f51a-4cf2-a390-3a5da70945ca/volumes" Feb 19 13:10:46 crc kubenswrapper[4833]: I0219 13:10:46.064851 4833 scope.go:117] "RemoveContainer" containerID="5b49de954ade5270b73330611daee6471d79f4fd53683cf220e957c41b76dfae" Feb 19 13:10:46 crc kubenswrapper[4833]: I0219 13:10:46.100029 4833 scope.go:117] "RemoveContainer" containerID="574072301f5e178d53718c251aae57b00a945a04d898c95c03429952b49d93c2" Feb 19 13:10:46 crc kubenswrapper[4833]: I0219 13:10:46.177305 4833 scope.go:117] "RemoveContainer" containerID="24555c0362540584930eb2b0f67b08f0230f630668eeb48fbcb6a570f233da57" Feb 19 13:10:46 crc kubenswrapper[4833]: I0219 13:10:46.235879 4833 scope.go:117] "RemoveContainer" containerID="5b16bda53cfdd66206d760bc0b6f48b1ab7187858af29e37ee123f0d0f5ed21f" Feb 19 13:11:47 crc kubenswrapper[4833]: I0219 13:11:47.269084 4833 scope.go:117] "RemoveContainer" containerID="44111b677fe64a1499e00cd8977512b8520ed595020ab2f0a9dc238288b49854" Feb 19 13:11:47 crc kubenswrapper[4833]: I0219 13:11:47.313408 4833 scope.go:117] "RemoveContainer" 
containerID="d01869bb2a8ef1704e5da7b6e3da6a215e0b7263f86c920ad1ac737c8a3e8137" Feb 19 13:11:47 crc kubenswrapper[4833]: I0219 13:11:47.350604 4833 scope.go:117] "RemoveContainer" containerID="e7621c18f354105869f8b96b2124f21516919184cd1d3c15215fa7c4984c85fe" Feb 19 13:12:39 crc kubenswrapper[4833]: I0219 13:12:39.396898 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kb7qw"] Feb 19 13:12:39 crc kubenswrapper[4833]: E0219 13:12:39.398484 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45f69f79-f51a-4cf2-a390-3a5da70945ca" containerName="extract-content" Feb 19 13:12:39 crc kubenswrapper[4833]: I0219 13:12:39.398539 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="45f69f79-f51a-4cf2-a390-3a5da70945ca" containerName="extract-content" Feb 19 13:12:39 crc kubenswrapper[4833]: E0219 13:12:39.398549 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45f69f79-f51a-4cf2-a390-3a5da70945ca" containerName="registry-server" Feb 19 13:12:39 crc kubenswrapper[4833]: I0219 13:12:39.398556 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="45f69f79-f51a-4cf2-a390-3a5da70945ca" containerName="registry-server" Feb 19 13:12:39 crc kubenswrapper[4833]: E0219 13:12:39.398571 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45f69f79-f51a-4cf2-a390-3a5da70945ca" containerName="extract-utilities" Feb 19 13:12:39 crc kubenswrapper[4833]: I0219 13:12:39.398577 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="45f69f79-f51a-4cf2-a390-3a5da70945ca" containerName="extract-utilities" Feb 19 13:12:39 crc kubenswrapper[4833]: I0219 13:12:39.398759 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="45f69f79-f51a-4cf2-a390-3a5da70945ca" containerName="registry-server" Feb 19 13:12:39 crc kubenswrapper[4833]: I0219 13:12:39.400171 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kb7qw" Feb 19 13:12:39 crc kubenswrapper[4833]: I0219 13:12:39.422483 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kb7qw"] Feb 19 13:12:39 crc kubenswrapper[4833]: I0219 13:12:39.514286 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59sb5\" (UniqueName: \"kubernetes.io/projected/2f90363d-9c7c-4ba1-b142-977bc3d81fd9-kube-api-access-59sb5\") pod \"certified-operators-kb7qw\" (UID: \"2f90363d-9c7c-4ba1-b142-977bc3d81fd9\") " pod="openshift-marketplace/certified-operators-kb7qw" Feb 19 13:12:39 crc kubenswrapper[4833]: I0219 13:12:39.514380 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f90363d-9c7c-4ba1-b142-977bc3d81fd9-utilities\") pod \"certified-operators-kb7qw\" (UID: \"2f90363d-9c7c-4ba1-b142-977bc3d81fd9\") " pod="openshift-marketplace/certified-operators-kb7qw" Feb 19 13:12:39 crc kubenswrapper[4833]: I0219 13:12:39.514455 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f90363d-9c7c-4ba1-b142-977bc3d81fd9-catalog-content\") pod \"certified-operators-kb7qw\" (UID: \"2f90363d-9c7c-4ba1-b142-977bc3d81fd9\") " pod="openshift-marketplace/certified-operators-kb7qw" Feb 19 13:12:39 crc kubenswrapper[4833]: I0219 13:12:39.615790 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f90363d-9c7c-4ba1-b142-977bc3d81fd9-catalog-content\") pod \"certified-operators-kb7qw\" (UID: \"2f90363d-9c7c-4ba1-b142-977bc3d81fd9\") " pod="openshift-marketplace/certified-operators-kb7qw" Feb 19 13:12:39 crc kubenswrapper[4833]: I0219 13:12:39.616330 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f90363d-9c7c-4ba1-b142-977bc3d81fd9-catalog-content\") pod \"certified-operators-kb7qw\" (UID: \"2f90363d-9c7c-4ba1-b142-977bc3d81fd9\") " pod="openshift-marketplace/certified-operators-kb7qw" Feb 19 13:12:39 crc kubenswrapper[4833]: I0219 13:12:39.616458 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59sb5\" (UniqueName: \"kubernetes.io/projected/2f90363d-9c7c-4ba1-b142-977bc3d81fd9-kube-api-access-59sb5\") pod \"certified-operators-kb7qw\" (UID: \"2f90363d-9c7c-4ba1-b142-977bc3d81fd9\") " pod="openshift-marketplace/certified-operators-kb7qw" Feb 19 13:12:39 crc kubenswrapper[4833]: I0219 13:12:39.616618 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f90363d-9c7c-4ba1-b142-977bc3d81fd9-utilities\") pod \"certified-operators-kb7qw\" (UID: \"2f90363d-9c7c-4ba1-b142-977bc3d81fd9\") " pod="openshift-marketplace/certified-operators-kb7qw" Feb 19 13:12:39 crc kubenswrapper[4833]: I0219 13:12:39.617087 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f90363d-9c7c-4ba1-b142-977bc3d81fd9-utilities\") pod \"certified-operators-kb7qw\" (UID: \"2f90363d-9c7c-4ba1-b142-977bc3d81fd9\") " pod="openshift-marketplace/certified-operators-kb7qw" Feb 19 13:12:39 crc kubenswrapper[4833]: I0219 13:12:39.639736 4833 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-59sb5\" (UniqueName: \"kubernetes.io/projected/2f90363d-9c7c-4ba1-b142-977bc3d81fd9-kube-api-access-59sb5\") pod \"certified-operators-kb7qw\" (UID: \"2f90363d-9c7c-4ba1-b142-977bc3d81fd9\") " pod="openshift-marketplace/certified-operators-kb7qw" Feb 19 13:12:39 crc kubenswrapper[4833]: I0219 13:12:39.724729 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kb7qw" Feb 19 13:12:40 crc kubenswrapper[4833]: I0219 13:12:40.251937 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kb7qw"] Feb 19 13:12:40 crc kubenswrapper[4833]: I0219 13:12:40.838119 4833 generic.go:334] "Generic (PLEG): container finished" podID="2f90363d-9c7c-4ba1-b142-977bc3d81fd9" containerID="3a6b6234b93ab140462708a94cb475b7c37b9c6dc35332066a311cd8c8395355" exitCode=0 Feb 19 13:12:40 crc kubenswrapper[4833]: I0219 13:12:40.838180 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kb7qw" event={"ID":"2f90363d-9c7c-4ba1-b142-977bc3d81fd9","Type":"ContainerDied","Data":"3a6b6234b93ab140462708a94cb475b7c37b9c6dc35332066a311cd8c8395355"} Feb 19 13:12:40 crc kubenswrapper[4833]: I0219 13:12:40.838215 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kb7qw" event={"ID":"2f90363d-9c7c-4ba1-b142-977bc3d81fd9","Type":"ContainerStarted","Data":"a15c2c4169cf7df96fbdd6138cb76e0351560bee633dd446a95ca6f1c6ee75a9"} Feb 19 13:12:41 crc kubenswrapper[4833]: I0219 13:12:41.849357 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kb7qw" event={"ID":"2f90363d-9c7c-4ba1-b142-977bc3d81fd9","Type":"ContainerStarted","Data":"3c847fe20375459a0b6f9828c87672fa3ed4f86bb6a17239f33c44d03dc119ed"} Feb 19 13:12:42 crc kubenswrapper[4833]: E0219 13:12:42.318959 4833 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f90363d_9c7c_4ba1_b142_977bc3d81fd9.slice/crio-conmon-3c847fe20375459a0b6f9828c87672fa3ed4f86bb6a17239f33c44d03dc119ed.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f90363d_9c7c_4ba1_b142_977bc3d81fd9.slice/crio-3c847fe20375459a0b6f9828c87672fa3ed4f86bb6a17239f33c44d03dc119ed.scope\": RecentStats: unable to find data in memory cache]" Feb 19 13:12:42 crc kubenswrapper[4833]: I0219 13:12:42.862740 4833 generic.go:334] "Generic (PLEG): container finished" podID="2f90363d-9c7c-4ba1-b142-977bc3d81fd9" containerID="3c847fe20375459a0b6f9828c87672fa3ed4f86bb6a17239f33c44d03dc119ed" exitCode=0 Feb 19 13:12:42 crc kubenswrapper[4833]: I0219 13:12:42.862911 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kb7qw" event={"ID":"2f90363d-9c7c-4ba1-b142-977bc3d81fd9","Type":"ContainerDied","Data":"3c847fe20375459a0b6f9828c87672fa3ed4f86bb6a17239f33c44d03dc119ed"} Feb 19 13:12:43 crc kubenswrapper[4833]: I0219 13:12:43.874583 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kb7qw" event={"ID":"2f90363d-9c7c-4ba1-b142-977bc3d81fd9","Type":"ContainerStarted","Data":"f2d918409881aa97c3a18f14ec6d1e9fb19b0bbce5d50a941351a58796fa0d0d"} Feb 19 13:12:43 crc kubenswrapper[4833]: I0219 13:12:43.901786 4833 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kb7qw" podStartSLOduration=2.473516565 podStartE2EDuration="4.901769181s" podCreationTimestamp="2026-02-19 13:12:39 +0000 UTC" firstStartedPulling="2026-02-19 13:12:40.850607139 +0000 UTC m=+1571.246125917" lastFinishedPulling="2026-02-19 13:12:43.278859725 +0000 UTC m=+1573.674378533" observedRunningTime="2026-02-19 13:12:43.899610685 +0000 UTC m=+1574.295129453" watchObservedRunningTime="2026-02-19 13:12:43.901769181 +0000 UTC m=+1574.297287949" Feb 19 13:12:45 crc kubenswrapper[4833]: I0219 13:12:45.744606 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:12:45 crc kubenswrapper[4833]: I0219 13:12:45.744675 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:12:49 crc kubenswrapper[4833]: I0219 13:12:49.725054 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kb7qw" Feb 19 13:12:49 crc kubenswrapper[4833]: I0219 13:12:49.725813 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kb7qw" Feb 19 13:12:49 crc kubenswrapper[4833]: I0219 13:12:49.780461 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kb7qw" Feb 19 13:12:50 crc kubenswrapper[4833]: I0219 13:12:50.011504 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kb7qw" Feb 19 13:12:50 crc kubenswrapper[4833]: I0219 13:12:50.071392 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kb7qw"] Feb 19 13:12:51 crc kubenswrapper[4833]: I0219 13:12:51.957953 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kb7qw" podUID="2f90363d-9c7c-4ba1-b142-977bc3d81fd9" containerName="registry-server" containerID="cri-o://f2d918409881aa97c3a18f14ec6d1e9fb19b0bbce5d50a941351a58796fa0d0d" gracePeriod=2 Feb 19 13:12:52 crc kubenswrapper[4833]: I0219 13:12:52.406383 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kb7qw" Feb 19 13:12:52 crc kubenswrapper[4833]: I0219 13:12:52.558942 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f90363d-9c7c-4ba1-b142-977bc3d81fd9-utilities\") pod \"2f90363d-9c7c-4ba1-b142-977bc3d81fd9\" (UID: \"2f90363d-9c7c-4ba1-b142-977bc3d81fd9\") " Feb 19 13:12:52 crc kubenswrapper[4833]: I0219 13:12:52.558998 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f90363d-9c7c-4ba1-b142-977bc3d81fd9-catalog-content\") pod \"2f90363d-9c7c-4ba1-b142-977bc3d81fd9\" (UID: \"2f90363d-9c7c-4ba1-b142-977bc3d81fd9\") " Feb 19 13:12:52 crc kubenswrapper[4833]: I0219 13:12:52.559072 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59sb5\" (UniqueName: \"kubernetes.io/projected/2f90363d-9c7c-4ba1-b142-977bc3d81fd9-kube-api-access-59sb5\") pod \"2f90363d-9c7c-4ba1-b142-977bc3d81fd9\" (UID: \"2f90363d-9c7c-4ba1-b142-977bc3d81fd9\") " Feb 19 13:12:52 crc kubenswrapper[4833]: I0219 13:12:52.561196 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f90363d-9c7c-4ba1-b142-977bc3d81fd9-utilities" (OuterVolumeSpecName: "utilities") pod "2f90363d-9c7c-4ba1-b142-977bc3d81fd9" (UID: "2f90363d-9c7c-4ba1-b142-977bc3d81fd9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:12:52 crc kubenswrapper[4833]: I0219 13:12:52.577953 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f90363d-9c7c-4ba1-b142-977bc3d81fd9-kube-api-access-59sb5" (OuterVolumeSpecName: "kube-api-access-59sb5") pod "2f90363d-9c7c-4ba1-b142-977bc3d81fd9" (UID: "2f90363d-9c7c-4ba1-b142-977bc3d81fd9"). InnerVolumeSpecName "kube-api-access-59sb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:12:52 crc kubenswrapper[4833]: I0219 13:12:52.623488 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f90363d-9c7c-4ba1-b142-977bc3d81fd9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f90363d-9c7c-4ba1-b142-977bc3d81fd9" (UID: "2f90363d-9c7c-4ba1-b142-977bc3d81fd9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:12:52 crc kubenswrapper[4833]: I0219 13:12:52.662855 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f90363d-9c7c-4ba1-b142-977bc3d81fd9-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:12:52 crc kubenswrapper[4833]: I0219 13:12:52.663086 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f90363d-9c7c-4ba1-b142-977bc3d81fd9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:12:52 crc kubenswrapper[4833]: I0219 13:12:52.663167 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59sb5\" (UniqueName: \"kubernetes.io/projected/2f90363d-9c7c-4ba1-b142-977bc3d81fd9-kube-api-access-59sb5\") on node \"crc\" DevicePath \"\"" Feb 19 13:12:52 crc kubenswrapper[4833]: I0219 13:12:52.970213 4833 generic.go:334] "Generic (PLEG): container finished" podID="2f90363d-9c7c-4ba1-b142-977bc3d81fd9" containerID="f2d918409881aa97c3a18f14ec6d1e9fb19b0bbce5d50a941351a58796fa0d0d" exitCode=0 Feb 19 13:12:52 crc kubenswrapper[4833]: I0219 13:12:52.970259 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kb7qw" event={"ID":"2f90363d-9c7c-4ba1-b142-977bc3d81fd9","Type":"ContainerDied","Data":"f2d918409881aa97c3a18f14ec6d1e9fb19b0bbce5d50a941351a58796fa0d0d"} Feb 19 13:12:52 crc kubenswrapper[4833]: I0219 13:12:52.970299 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kb7qw" event={"ID":"2f90363d-9c7c-4ba1-b142-977bc3d81fd9","Type":"ContainerDied","Data":"a15c2c4169cf7df96fbdd6138cb76e0351560bee633dd446a95ca6f1c6ee75a9"} Feb 19 13:12:52 crc kubenswrapper[4833]: I0219 13:12:52.970295 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kb7qw" Feb 19 13:12:52 crc kubenswrapper[4833]: I0219 13:12:52.970316 4833 scope.go:117] "RemoveContainer" containerID="f2d918409881aa97c3a18f14ec6d1e9fb19b0bbce5d50a941351a58796fa0d0d" Feb 19 13:12:52 crc kubenswrapper[4833]: I0219 13:12:52.998899 4833 scope.go:117] "RemoveContainer" containerID="3c847fe20375459a0b6f9828c87672fa3ed4f86bb6a17239f33c44d03dc119ed" Feb 19 13:12:53 crc kubenswrapper[4833]: I0219 13:12:53.011864 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kb7qw"] Feb 19 13:12:53 crc kubenswrapper[4833]: I0219 13:12:53.019653 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kb7qw"] Feb 19 13:12:53 crc kubenswrapper[4833]: I0219 13:12:53.041623 4833 scope.go:117] "RemoveContainer" containerID="3a6b6234b93ab140462708a94cb475b7c37b9c6dc35332066a311cd8c8395355" Feb 19 13:12:53 crc kubenswrapper[4833]: I0219 13:12:53.070013 4833 scope.go:117] "RemoveContainer" containerID="f2d918409881aa97c3a18f14ec6d1e9fb19b0bbce5d50a941351a58796fa0d0d" Feb 19 13:12:53 crc kubenswrapper[4833]: E0219 13:12:53.071974 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2d918409881aa97c3a18f14ec6d1e9fb19b0bbce5d50a941351a58796fa0d0d\": container with ID starting with f2d918409881aa97c3a18f14ec6d1e9fb19b0bbce5d50a941351a58796fa0d0d not found: ID does not exist" containerID="f2d918409881aa97c3a18f14ec6d1e9fb19b0bbce5d50a941351a58796fa0d0d" Feb 19 13:12:53 crc kubenswrapper[4833]: I0219 13:12:53.072030 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2d918409881aa97c3a18f14ec6d1e9fb19b0bbce5d50a941351a58796fa0d0d"} err="failed to get container status \"f2d918409881aa97c3a18f14ec6d1e9fb19b0bbce5d50a941351a58796fa0d0d\": rpc error: code = NotFound desc = could not find container \"f2d918409881aa97c3a18f14ec6d1e9fb19b0bbce5d50a941351a58796fa0d0d\": container with ID starting with f2d918409881aa97c3a18f14ec6d1e9fb19b0bbce5d50a941351a58796fa0d0d not found: ID does not exist" Feb 19 13:12:53 crc kubenswrapper[4833]: I0219 13:12:53.072057 4833 scope.go:117] "RemoveContainer" containerID="3c847fe20375459a0b6f9828c87672fa3ed4f86bb6a17239f33c44d03dc119ed" Feb 19 13:12:53 crc kubenswrapper[4833]: E0219 13:12:53.072448 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c847fe20375459a0b6f9828c87672fa3ed4f86bb6a17239f33c44d03dc119ed\": container with ID starting with 3c847fe20375459a0b6f9828c87672fa3ed4f86bb6a17239f33c44d03dc119ed not found: ID does not exist" containerID="3c847fe20375459a0b6f9828c87672fa3ed4f86bb6a17239f33c44d03dc119ed" Feb 19 13:12:53 crc kubenswrapper[4833]: I0219 13:12:53.072688 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c847fe20375459a0b6f9828c87672fa3ed4f86bb6a17239f33c44d03dc119ed"} err="failed to get container status \"3c847fe20375459a0b6f9828c87672fa3ed4f86bb6a17239f33c44d03dc119ed\": rpc error: code = NotFound desc = could not find container \"3c847fe20375459a0b6f9828c87672fa3ed4f86bb6a17239f33c44d03dc119ed\": container with ID starting with 3c847fe20375459a0b6f9828c87672fa3ed4f86bb6a17239f33c44d03dc119ed not found: ID does not exist" Feb 19 13:12:53 crc kubenswrapper[4833]: I0219 13:12:53.073299 4833 scope.go:117] "RemoveContainer" 
containerID="3a6b6234b93ab140462708a94cb475b7c37b9c6dc35332066a311cd8c8395355" Feb 19 13:12:53 crc kubenswrapper[4833]: E0219 13:12:53.073727 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a6b6234b93ab140462708a94cb475b7c37b9c6dc35332066a311cd8c8395355\": container with ID starting with 3a6b6234b93ab140462708a94cb475b7c37b9c6dc35332066a311cd8c8395355 not found: ID does not exist" containerID="3a6b6234b93ab140462708a94cb475b7c37b9c6dc35332066a311cd8c8395355" Feb 19 13:12:53 crc kubenswrapper[4833]: I0219 13:12:53.073775 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a6b6234b93ab140462708a94cb475b7c37b9c6dc35332066a311cd8c8395355"} err="failed to get container status \"3a6b6234b93ab140462708a94cb475b7c37b9c6dc35332066a311cd8c8395355\": rpc error: code = NotFound desc = could not find container \"3a6b6234b93ab140462708a94cb475b7c37b9c6dc35332066a311cd8c8395355\": container with ID starting with 3a6b6234b93ab140462708a94cb475b7c37b9c6dc35332066a311cd8c8395355 not found: ID does not exist" Feb 19 13:12:54 crc kubenswrapper[4833]: I0219 13:12:54.333961 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f90363d-9c7c-4ba1-b142-977bc3d81fd9" path="/var/lib/kubelet/pods/2f90363d-9c7c-4ba1-b142-977bc3d81fd9/volumes" Feb 19 13:12:57 crc kubenswrapper[4833]: I0219 13:12:57.057735 4833 generic.go:334] "Generic (PLEG): container finished" podID="71fc87a7-2568-481c-a841-6500a69ba8b9" containerID="6539bba5763c60c9db39f5b3a1fbd7702ee2d910a033f0096496453cd7fa7f52" exitCode=0 Feb 19 13:12:57 crc kubenswrapper[4833]: I0219 13:12:57.057791 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp" event={"ID":"71fc87a7-2568-481c-a841-6500a69ba8b9","Type":"ContainerDied","Data":"6539bba5763c60c9db39f5b3a1fbd7702ee2d910a033f0096496453cd7fa7f52"} Feb 19 13:12:58 crc kubenswrapper[4833]: I0219 13:12:58.554922 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp" Feb 19 13:12:58 crc kubenswrapper[4833]: I0219 13:12:58.684291 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71fc87a7-2568-481c-a841-6500a69ba8b9-inventory\") pod \"71fc87a7-2568-481c-a841-6500a69ba8b9\" (UID: \"71fc87a7-2568-481c-a841-6500a69ba8b9\") " Feb 19 13:12:58 crc kubenswrapper[4833]: I0219 13:12:58.684371 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71fc87a7-2568-481c-a841-6500a69ba8b9-ssh-key-openstack-edpm-ipam\") pod \"71fc87a7-2568-481c-a841-6500a69ba8b9\" (UID: \"71fc87a7-2568-481c-a841-6500a69ba8b9\") " Feb 19 13:12:58 crc kubenswrapper[4833]: I0219 13:12:58.684533 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55xlz\" (UniqueName: \"kubernetes.io/projected/71fc87a7-2568-481c-a841-6500a69ba8b9-kube-api-access-55xlz\") pod \"71fc87a7-2568-481c-a841-6500a69ba8b9\" (UID: \"71fc87a7-2568-481c-a841-6500a69ba8b9\") " Feb 19 13:12:58 crc kubenswrapper[4833]: I0219 13:12:58.684615 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71fc87a7-2568-481c-a841-6500a69ba8b9-bootstrap-combined-ca-bundle\") pod \"71fc87a7-2568-481c-a841-6500a69ba8b9\" (UID: \"71fc87a7-2568-481c-a841-6500a69ba8b9\") " Feb 19 13:12:58 crc kubenswrapper[4833]: I0219 13:12:58.695706 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71fc87a7-2568-481c-a841-6500a69ba8b9-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "71fc87a7-2568-481c-a841-6500a69ba8b9" (UID: "71fc87a7-2568-481c-a841-6500a69ba8b9"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:12:58 crc kubenswrapper[4833]: I0219 13:12:58.707823 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71fc87a7-2568-481c-a841-6500a69ba8b9-kube-api-access-55xlz" (OuterVolumeSpecName: "kube-api-access-55xlz") pod "71fc87a7-2568-481c-a841-6500a69ba8b9" (UID: "71fc87a7-2568-481c-a841-6500a69ba8b9"). InnerVolumeSpecName "kube-api-access-55xlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:12:58 crc kubenswrapper[4833]: I0219 13:12:58.724044 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71fc87a7-2568-481c-a841-6500a69ba8b9-inventory" (OuterVolumeSpecName: "inventory") pod "71fc87a7-2568-481c-a841-6500a69ba8b9" (UID: "71fc87a7-2568-481c-a841-6500a69ba8b9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:12:58 crc kubenswrapper[4833]: I0219 13:12:58.740645 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71fc87a7-2568-481c-a841-6500a69ba8b9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "71fc87a7-2568-481c-a841-6500a69ba8b9" (UID: "71fc87a7-2568-481c-a841-6500a69ba8b9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:12:58 crc kubenswrapper[4833]: I0219 13:12:58.786375 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71fc87a7-2568-481c-a841-6500a69ba8b9-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 13:12:58 crc kubenswrapper[4833]: I0219 13:12:58.786421 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71fc87a7-2568-481c-a841-6500a69ba8b9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 13:12:58 crc kubenswrapper[4833]: I0219 13:12:58.786436 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55xlz\" (UniqueName: \"kubernetes.io/projected/71fc87a7-2568-481c-a841-6500a69ba8b9-kube-api-access-55xlz\") on node \"crc\" DevicePath \"\"" Feb 19 13:12:58 crc kubenswrapper[4833]: I0219 13:12:58.786447 4833 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71fc87a7-2568-481c-a841-6500a69ba8b9-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:12:59 crc kubenswrapper[4833]: I0219 13:12:59.077753 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp" event={"ID":"71fc87a7-2568-481c-a841-6500a69ba8b9","Type":"ContainerDied","Data":"8a9d1aec3edcc4ef2e0d840b48cc15d81edf16dc86156b770093fe74b2bafb4c"} Feb 19 13:12:59 crc kubenswrapper[4833]: I0219 13:12:59.077799 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a9d1aec3edcc4ef2e0d840b48cc15d81edf16dc86156b770093fe74b2bafb4c" Feb 19 13:12:59 crc kubenswrapper[4833]: I0219 13:12:59.077904 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp" Feb 19 13:12:59 crc kubenswrapper[4833]: I0219 13:12:59.184469 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gn8cb"] Feb 19 13:12:59 crc kubenswrapper[4833]: E0219 13:12:59.185139 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f90363d-9c7c-4ba1-b142-977bc3d81fd9" containerName="extract-content" Feb 19 13:12:59 crc kubenswrapper[4833]: I0219 13:12:59.185162 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f90363d-9c7c-4ba1-b142-977bc3d81fd9" containerName="extract-content" Feb 19 13:12:59 crc kubenswrapper[4833]: E0219 13:12:59.185187 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71fc87a7-2568-481c-a841-6500a69ba8b9" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 19 13:12:59 crc kubenswrapper[4833]: I0219 13:12:59.185197 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="71fc87a7-2568-481c-a841-6500a69ba8b9" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 19 13:12:59 crc kubenswrapper[4833]: E0219 13:12:59.185217 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f90363d-9c7c-4ba1-b142-977bc3d81fd9" containerName="extract-utilities" Feb 19 13:12:59 crc kubenswrapper[4833]: I0219 13:12:59.185225 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f90363d-9c7c-4ba1-b142-977bc3d81fd9" containerName="extract-utilities" Feb 19 13:12:59 crc kubenswrapper[4833]: E0219 13:12:59.185252 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f90363d-9c7c-4ba1-b142-977bc3d81fd9" containerName="registry-server" Feb 19 13:12:59 crc kubenswrapper[4833]: I0219 13:12:59.185259 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f90363d-9c7c-4ba1-b142-977bc3d81fd9" containerName="registry-server" Feb 19 13:12:59 crc kubenswrapper[4833]: I0219 13:12:59.185483 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="71fc87a7-2568-481c-a841-6500a69ba8b9" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 19 13:12:59 crc kubenswrapper[4833]: I0219 13:12:59.185528 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f90363d-9c7c-4ba1-b142-977bc3d81fd9" containerName="registry-server" Feb 19 13:12:59 crc kubenswrapper[4833]: I0219 13:12:59.186305 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gn8cb" Feb 19 13:12:59 crc kubenswrapper[4833]: I0219 13:12:59.188487 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xcxf4" Feb 19 13:12:59 crc kubenswrapper[4833]: I0219 13:12:59.188762 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 13:12:59 crc kubenswrapper[4833]: I0219 13:12:59.188887 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 13:12:59 crc kubenswrapper[4833]: I0219 13:12:59.190287 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 13:12:59 crc kubenswrapper[4833]: I0219 13:12:59.204153 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gn8cb"] Feb 19 13:12:59 crc kubenswrapper[4833]: I0219 13:12:59.295622 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ade3dcb5-7a6a-4bef-a706-01dbd2d074a1-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gn8cb\" (UID: \"ade3dcb5-7a6a-4bef-a706-01dbd2d074a1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gn8cb" Feb 19 13:12:59 crc kubenswrapper[4833]: I0219 13:12:59.295730 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdbgs\" (UniqueName: \"kubernetes.io/projected/ade3dcb5-7a6a-4bef-a706-01dbd2d074a1-kube-api-access-qdbgs\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gn8cb\" (UID: \"ade3dcb5-7a6a-4bef-a706-01dbd2d074a1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gn8cb" Feb 19 13:12:59 crc kubenswrapper[4833]: I0219 13:12:59.295918 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ade3dcb5-7a6a-4bef-a706-01dbd2d074a1-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gn8cb\" (UID: \"ade3dcb5-7a6a-4bef-a706-01dbd2d074a1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gn8cb" Feb 19 13:12:59 crc kubenswrapper[4833]: I0219 13:12:59.397434 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ade3dcb5-7a6a-4bef-a706-01dbd2d074a1-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gn8cb\" (UID: \"ade3dcb5-7a6a-4bef-a706-01dbd2d074a1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gn8cb" Feb 19 13:12:59 crc kubenswrapper[4833]: I0219 13:12:59.397801 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ade3dcb5-7a6a-4bef-a706-01dbd2d074a1-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gn8cb\" (UID: \"ade3dcb5-7a6a-4bef-a706-01dbd2d074a1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gn8cb" Feb 19 13:12:59 crc kubenswrapper[4833]: I0219 13:12:59.397891 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdbgs\" (UniqueName: 
\"kubernetes.io/projected/ade3dcb5-7a6a-4bef-a706-01dbd2d074a1-kube-api-access-qdbgs\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gn8cb\" (UID: \"ade3dcb5-7a6a-4bef-a706-01dbd2d074a1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gn8cb" Feb 19 13:12:59 crc kubenswrapper[4833]: I0219 13:12:59.403299 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ade3dcb5-7a6a-4bef-a706-01dbd2d074a1-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gn8cb\" (UID: \"ade3dcb5-7a6a-4bef-a706-01dbd2d074a1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gn8cb" Feb 19 13:12:59 crc kubenswrapper[4833]: I0219 13:12:59.403805 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ade3dcb5-7a6a-4bef-a706-01dbd2d074a1-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gn8cb\" (UID: \"ade3dcb5-7a6a-4bef-a706-01dbd2d074a1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gn8cb" Feb 19 13:12:59 crc kubenswrapper[4833]: I0219 13:12:59.414696 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdbgs\" (UniqueName: \"kubernetes.io/projected/ade3dcb5-7a6a-4bef-a706-01dbd2d074a1-kube-api-access-qdbgs\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-gn8cb\" (UID: \"ade3dcb5-7a6a-4bef-a706-01dbd2d074a1\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gn8cb" Feb 19 13:12:59 crc kubenswrapper[4833]: I0219 13:12:59.508566 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gn8cb" Feb 19 13:13:00 crc kubenswrapper[4833]: I0219 13:13:00.081335 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gn8cb"] Feb 19 13:13:01 crc kubenswrapper[4833]: I0219 13:13:01.097097 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gn8cb" event={"ID":"ade3dcb5-7a6a-4bef-a706-01dbd2d074a1","Type":"ContainerStarted","Data":"9d04607679dc845d7749a30e45b2dc1cc80687bcf28d9985319314f1c77166a1"} Feb 19 13:13:01 crc kubenswrapper[4833]: I0219 13:13:01.097486 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gn8cb" event={"ID":"ade3dcb5-7a6a-4bef-a706-01dbd2d074a1","Type":"ContainerStarted","Data":"cb68576589d7a51fcddafd68c8403e5038500de8e2214bc71dd2a5a9eadf53bf"} Feb 19 13:13:01 crc kubenswrapper[4833]: I0219 13:13:01.123360 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gn8cb" podStartSLOduration=1.5752967930000001 podStartE2EDuration="2.123316349s" podCreationTimestamp="2026-02-19 13:12:59 +0000 UTC" firstStartedPulling="2026-02-19 13:13:00.084367289 +0000 UTC m=+1590.479886067" lastFinishedPulling="2026-02-19 13:13:00.632386815 +0000 UTC m=+1591.027905623" observedRunningTime="2026-02-19 13:13:01.116344027 +0000 UTC m=+1591.511862835" watchObservedRunningTime="2026-02-19 13:13:01.123316349 +0000 UTC m=+1591.518835147" Feb 19 13:13:15 crc kubenswrapper[4833]: I0219 13:13:15.744199 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:13:15 crc kubenswrapper[4833]: I0219 13:13:15.744815 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:13:34 crc kubenswrapper[4833]: I0219 13:13:34.420043 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-725j7"] Feb 19 13:13:34 crc kubenswrapper[4833]: I0219 13:13:34.423038 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-725j7" Feb 19 13:13:34 crc kubenswrapper[4833]: I0219 13:13:34.429049 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-725j7"] Feb 19 13:13:34 crc kubenswrapper[4833]: I0219 13:13:34.499132 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/911ad84d-7578-4dad-92d5-9a764385baaa-catalog-content\") pod \"redhat-marketplace-725j7\" (UID: \"911ad84d-7578-4dad-92d5-9a764385baaa\") " pod="openshift-marketplace/redhat-marketplace-725j7" Feb 19 13:13:34 crc kubenswrapper[4833]: I0219 13:13:34.499512 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/911ad84d-7578-4dad-92d5-9a764385baaa-utilities\") pod \"redhat-marketplace-725j7\" (UID: \"911ad84d-7578-4dad-92d5-9a764385baaa\") " pod="openshift-marketplace/redhat-marketplace-725j7" Feb 19 13:13:34 crc kubenswrapper[4833]: I0219 13:13:34.499557 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tqxj\" (UniqueName: \"kubernetes.io/projected/911ad84d-7578-4dad-92d5-9a764385baaa-kube-api-access-9tqxj\") pod \"redhat-marketplace-725j7\" (UID: \"911ad84d-7578-4dad-92d5-9a764385baaa\") " pod="openshift-marketplace/redhat-marketplace-725j7" Feb 19 13:13:34 crc kubenswrapper[4833]: I0219 13:13:34.601850 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/911ad84d-7578-4dad-92d5-9a764385baaa-utilities\") pod \"redhat-marketplace-725j7\" (UID: \"911ad84d-7578-4dad-92d5-9a764385baaa\") " pod="openshift-marketplace/redhat-marketplace-725j7" Feb 19 13:13:34 crc kubenswrapper[4833]: I0219 13:13:34.601906 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tqxj\" (UniqueName: \"kubernetes.io/projected/911ad84d-7578-4dad-92d5-9a764385baaa-kube-api-access-9tqxj\") pod \"redhat-marketplace-725j7\" (UID: \"911ad84d-7578-4dad-92d5-9a764385baaa\") " pod="openshift-marketplace/redhat-marketplace-725j7" Feb 19 13:13:34 crc kubenswrapper[4833]: I0219 13:13:34.601949 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/911ad84d-7578-4dad-92d5-9a764385baaa-catalog-content\") pod \"redhat-marketplace-725j7\" (UID: \"911ad84d-7578-4dad-92d5-9a764385baaa\") " pod="openshift-marketplace/redhat-marketplace-725j7" Feb 19 
13:13:34 crc kubenswrapper[4833]: I0219 13:13:34.602339 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/911ad84d-7578-4dad-92d5-9a764385baaa-utilities\") pod \"redhat-marketplace-725j7\" (UID: \"911ad84d-7578-4dad-92d5-9a764385baaa\") " pod="openshift-marketplace/redhat-marketplace-725j7" Feb 19 13:13:34 crc kubenswrapper[4833]: I0219 13:13:34.602437 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/911ad84d-7578-4dad-92d5-9a764385baaa-catalog-content\") pod \"redhat-marketplace-725j7\" (UID: \"911ad84d-7578-4dad-92d5-9a764385baaa\") " pod="openshift-marketplace/redhat-marketplace-725j7" Feb 19 13:13:34 crc kubenswrapper[4833]: I0219 13:13:34.626095 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tqxj\" (UniqueName: \"kubernetes.io/projected/911ad84d-7578-4dad-92d5-9a764385baaa-kube-api-access-9tqxj\") pod \"redhat-marketplace-725j7\" (UID: \"911ad84d-7578-4dad-92d5-9a764385baaa\") " pod="openshift-marketplace/redhat-marketplace-725j7" Feb 19 13:13:34 crc kubenswrapper[4833]: I0219 13:13:34.747399 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-725j7" Feb 19 13:13:35 crc kubenswrapper[4833]: I0219 13:13:35.245243 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-725j7"] Feb 19 13:13:35 crc kubenswrapper[4833]: I0219 13:13:35.485900 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-725j7" event={"ID":"911ad84d-7578-4dad-92d5-9a764385baaa","Type":"ContainerStarted","Data":"ce3b68ea394a199834989ca499f840f9a6ab1e4688f45a6a960eeae39627403b"} Feb 19 13:13:36 crc kubenswrapper[4833]: I0219 13:13:36.500744 4833 generic.go:334] "Generic (PLEG): container finished" podID="911ad84d-7578-4dad-92d5-9a764385baaa" containerID="d3c47ac62eb0a6158487aaa80c20a615c9cfeaa2ae7fac2b01e44334f4af1394" exitCode=0 Feb 19 13:13:36 crc kubenswrapper[4833]: I0219 13:13:36.500870 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-725j7" event={"ID":"911ad84d-7578-4dad-92d5-9a764385baaa","Type":"ContainerDied","Data":"d3c47ac62eb0a6158487aaa80c20a615c9cfeaa2ae7fac2b01e44334f4af1394"} Feb 19 13:13:38 crc kubenswrapper[4833]: I0219 13:13:38.521937 4833 generic.go:334] "Generic (PLEG): container finished" podID="911ad84d-7578-4dad-92d5-9a764385baaa" containerID="45b4ae5429d585684a0f2bd8277d66f862db993e53f76940a6dd15eeab828a8b" exitCode=0 Feb 19 13:13:38 crc kubenswrapper[4833]: I0219 13:13:38.521998 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-725j7" event={"ID":"911ad84d-7578-4dad-92d5-9a764385baaa","Type":"ContainerDied","Data":"45b4ae5429d585684a0f2bd8277d66f862db993e53f76940a6dd15eeab828a8b"} Feb 19 13:13:39 crc kubenswrapper[4833]: I0219 13:13:39.533563 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-725j7" event={"ID":"911ad84d-7578-4dad-92d5-9a764385baaa","Type":"ContainerStarted","Data":"7df552217df053191efed77e14d1c7df4d545463e4a4839de1fa20a93733d5a6"} Feb 19 13:13:39 crc kubenswrapper[4833]: I0219 13:13:39.562069 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-725j7" podStartSLOduration=2.890688877 
podStartE2EDuration="5.562047867s" podCreationTimestamp="2026-02-19 13:13:34 +0000 UTC" firstStartedPulling="2026-02-19 13:13:36.503010143 +0000 UTC m=+1626.898528921" lastFinishedPulling="2026-02-19 13:13:39.174369133 +0000 UTC m=+1629.569887911" observedRunningTime="2026-02-19 13:13:39.552197748 +0000 UTC m=+1629.947716546" watchObservedRunningTime="2026-02-19 13:13:39.562047867 +0000 UTC m=+1629.957566635" Feb 19 13:13:44 crc kubenswrapper[4833]: I0219 13:13:44.748106 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-725j7" Feb 19 13:13:44 crc kubenswrapper[4833]: I0219 13:13:44.748817 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-725j7" Feb 19 13:13:44 crc kubenswrapper[4833]: I0219 13:13:44.824451 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-725j7" Feb 19 13:13:45 crc kubenswrapper[4833]: I0219 13:13:45.664870 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-725j7" Feb 19 13:13:45 crc kubenswrapper[4833]: I0219 13:13:45.727686 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-725j7"] Feb 19 13:13:45 crc kubenswrapper[4833]: I0219 13:13:45.745214 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:13:45 crc kubenswrapper[4833]: I0219 13:13:45.745312 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:13:45 crc kubenswrapper[4833]: I0219 13:13:45.745382 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" Feb 19 13:13:45 crc kubenswrapper[4833]: I0219 13:13:45.746374 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"68af062ad026f894823c5275509a3a85a3d7b9b44d6ca2d938db284880905483"} pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 13:13:45 crc kubenswrapper[4833]: I0219 13:13:45.746460 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" containerID="cri-o://68af062ad026f894823c5275509a3a85a3d7b9b44d6ca2d938db284880905483" gracePeriod=600 Feb 19 13:13:45 crc kubenswrapper[4833]: E0219 13:13:45.875056 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:13:46 crc kubenswrapper[4833]: I0219 13:13:46.611663 4833 generic.go:334] "Generic (PLEG): container finished" podID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerID="68af062ad026f894823c5275509a3a85a3d7b9b44d6ca2d938db284880905483" exitCode=0 Feb 19 13:13:46 crc kubenswrapper[4833]: I0219 13:13:46.611746 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" event={"ID":"a396d626-cea2-42cf-84c5-943b0b85a92b","Type":"ContainerDied","Data":"68af062ad026f894823c5275509a3a85a3d7b9b44d6ca2d938db284880905483"} Feb 19 13:13:46 crc kubenswrapper[4833]: I0219 13:13:46.611837 4833 scope.go:117] "RemoveContainer" containerID="44cd4d92890c7506a1edce4407a60145e4dd4d2e3ac145ff2d3b775c7a0f6b00" Feb 19 13:13:46 crc kubenswrapper[4833]: I0219 13:13:46.613066 4833 scope.go:117] "RemoveContainer" containerID="68af062ad026f894823c5275509a3a85a3d7b9b44d6ca2d938db284880905483" Feb 19 13:13:46 crc kubenswrapper[4833]: E0219 13:13:46.613684 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:13:47 crc kubenswrapper[4833]: I0219 13:13:47.465950 4833 scope.go:117] "RemoveContainer" containerID="8ad97adb9e8b776d062bbe5524d1979b382ebfade3c23dbbbe218fb079f16b17" Feb 19 13:13:47 crc kubenswrapper[4833]: I0219 13:13:47.501902 4833 scope.go:117] "RemoveContainer" containerID="563715ad6f6398546ac35d27869876b072c2473ace9725a1602890cd511f5898" Feb 19 13:13:47 crc kubenswrapper[4833]: I0219 13:13:47.623075 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-725j7" podUID="911ad84d-7578-4dad-92d5-9a764385baaa" containerName="registry-server" containerID="cri-o://7df552217df053191efed77e14d1c7df4d545463e4a4839de1fa20a93733d5a6" gracePeriod=2 Feb 19 13:13:48 crc kubenswrapper[4833]: I0219 13:13:48.142386 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-725j7" Feb 19 13:13:48 crc kubenswrapper[4833]: I0219 13:13:48.179962 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tqxj\" (UniqueName: \"kubernetes.io/projected/911ad84d-7578-4dad-92d5-9a764385baaa-kube-api-access-9tqxj\") pod \"911ad84d-7578-4dad-92d5-9a764385baaa\" (UID: \"911ad84d-7578-4dad-92d5-9a764385baaa\") " Feb 19 13:13:48 crc kubenswrapper[4833]: I0219 13:13:48.180145 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/911ad84d-7578-4dad-92d5-9a764385baaa-catalog-content\") pod \"911ad84d-7578-4dad-92d5-9a764385baaa\" (UID: \"911ad84d-7578-4dad-92d5-9a764385baaa\") " Feb 19 13:13:48 crc kubenswrapper[4833]: I0219 13:13:48.180223 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/911ad84d-7578-4dad-92d5-9a764385baaa-utilities\") pod \"911ad84d-7578-4dad-92d5-9a764385baaa\" (UID: \"911ad84d-7578-4dad-92d5-9a764385baaa\") " Feb 19 13:13:48 crc kubenswrapper[4833]: I0219 13:13:48.181444 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/911ad84d-7578-4dad-92d5-9a764385baaa-utilities" (OuterVolumeSpecName: "utilities") pod "911ad84d-7578-4dad-92d5-9a764385baaa" (UID: "911ad84d-7578-4dad-92d5-9a764385baaa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:13:48 crc kubenswrapper[4833]: I0219 13:13:48.185365 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/911ad84d-7578-4dad-92d5-9a764385baaa-kube-api-access-9tqxj" (OuterVolumeSpecName: "kube-api-access-9tqxj") pod "911ad84d-7578-4dad-92d5-9a764385baaa" (UID: "911ad84d-7578-4dad-92d5-9a764385baaa"). InnerVolumeSpecName "kube-api-access-9tqxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:13:48 crc kubenswrapper[4833]: I0219 13:13:48.203343 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/911ad84d-7578-4dad-92d5-9a764385baaa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "911ad84d-7578-4dad-92d5-9a764385baaa" (UID: "911ad84d-7578-4dad-92d5-9a764385baaa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:13:48 crc kubenswrapper[4833]: I0219 13:13:48.282256 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tqxj\" (UniqueName: \"kubernetes.io/projected/911ad84d-7578-4dad-92d5-9a764385baaa-kube-api-access-9tqxj\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:48 crc kubenswrapper[4833]: I0219 13:13:48.282294 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/911ad84d-7578-4dad-92d5-9a764385baaa-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:48 crc kubenswrapper[4833]: I0219 13:13:48.282306 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/911ad84d-7578-4dad-92d5-9a764385baaa-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:13:48 crc kubenswrapper[4833]: I0219 13:13:48.637108 4833 generic.go:334] "Generic (PLEG): container finished" podID="911ad84d-7578-4dad-92d5-9a764385baaa" containerID="7df552217df053191efed77e14d1c7df4d545463e4a4839de1fa20a93733d5a6" exitCode=0 Feb 19 13:13:48 crc kubenswrapper[4833]: I0219 13:13:48.637175 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-725j7" event={"ID":"911ad84d-7578-4dad-92d5-9a764385baaa","Type":"ContainerDied","Data":"7df552217df053191efed77e14d1c7df4d545463e4a4839de1fa20a93733d5a6"} Feb 19 13:13:48 crc kubenswrapper[4833]: I0219 13:13:48.637201 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-725j7" Feb 19 13:13:48 crc kubenswrapper[4833]: I0219 13:13:48.637226 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-725j7" event={"ID":"911ad84d-7578-4dad-92d5-9a764385baaa","Type":"ContainerDied","Data":"ce3b68ea394a199834989ca499f840f9a6ab1e4688f45a6a960eeae39627403b"} Feb 19 13:13:48 crc kubenswrapper[4833]: I0219 13:13:48.637277 4833 scope.go:117] "RemoveContainer" containerID="7df552217df053191efed77e14d1c7df4d545463e4a4839de1fa20a93733d5a6" Feb 19 13:13:48 crc kubenswrapper[4833]: I0219 13:13:48.687673 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-725j7"] Feb 19 13:13:48 crc kubenswrapper[4833]: I0219 13:13:48.688216 4833 scope.go:117] "RemoveContainer" containerID="45b4ae5429d585684a0f2bd8277d66f862db993e53f76940a6dd15eeab828a8b" Feb 19 13:13:48 crc kubenswrapper[4833]: I0219 13:13:48.699837 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-725j7"] Feb 19 13:13:48 crc kubenswrapper[4833]: I0219 13:13:48.716256 4833 scope.go:117] "RemoveContainer" containerID="d3c47ac62eb0a6158487aaa80c20a615c9cfeaa2ae7fac2b01e44334f4af1394" Feb 19 13:13:48 crc kubenswrapper[4833]: I0219 13:13:48.797898 4833 scope.go:117] "RemoveContainer" containerID="7df552217df053191efed77e14d1c7df4d545463e4a4839de1fa20a93733d5a6" Feb 19 13:13:48 crc kubenswrapper[4833]: E0219 13:13:48.798656 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7df552217df053191efed77e14d1c7df4d545463e4a4839de1fa20a93733d5a6\": container with ID starting with 7df552217df053191efed77e14d1c7df4d545463e4a4839de1fa20a93733d5a6 not found: ID does not exist" containerID="7df552217df053191efed77e14d1c7df4d545463e4a4839de1fa20a93733d5a6" Feb 19 13:13:48 crc kubenswrapper[4833]: I0219 13:13:48.798700 4833 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7df552217df053191efed77e14d1c7df4d545463e4a4839de1fa20a93733d5a6"} err="failed to get container status \"7df552217df053191efed77e14d1c7df4d545463e4a4839de1fa20a93733d5a6\": rpc error: code = NotFound desc = could not find container \"7df552217df053191efed77e14d1c7df4d545463e4a4839de1fa20a93733d5a6\": container with ID starting with 7df552217df053191efed77e14d1c7df4d545463e4a4839de1fa20a93733d5a6 not found: ID does not exist" Feb 19 13:13:48 crc kubenswrapper[4833]: I0219 13:13:48.798725 4833 scope.go:117] "RemoveContainer" containerID="45b4ae5429d585684a0f2bd8277d66f862db993e53f76940a6dd15eeab828a8b" Feb 19 13:13:48 crc kubenswrapper[4833]: E0219 13:13:48.799138 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45b4ae5429d585684a0f2bd8277d66f862db993e53f76940a6dd15eeab828a8b\": container with ID starting with 45b4ae5429d585684a0f2bd8277d66f862db993e53f76940a6dd15eeab828a8b not found: ID does not exist" containerID="45b4ae5429d585684a0f2bd8277d66f862db993e53f76940a6dd15eeab828a8b" Feb 19 13:13:48 crc kubenswrapper[4833]: I0219 13:13:48.799165 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45b4ae5429d585684a0f2bd8277d66f862db993e53f76940a6dd15eeab828a8b"} err="failed to get container status \"45b4ae5429d585684a0f2bd8277d66f862db993e53f76940a6dd15eeab828a8b\": rpc error: code = NotFound desc = could not find container \"45b4ae5429d585684a0f2bd8277d66f862db993e53f76940a6dd15eeab828a8b\": container with ID starting with 45b4ae5429d585684a0f2bd8277d66f862db993e53f76940a6dd15eeab828a8b not found: ID does not exist" Feb 19 13:13:48 crc kubenswrapper[4833]: I0219 13:13:48.799183 4833 scope.go:117] "RemoveContainer" containerID="d3c47ac62eb0a6158487aaa80c20a615c9cfeaa2ae7fac2b01e44334f4af1394" Feb 19 13:13:48 crc kubenswrapper[4833]: E0219 13:13:48.799433 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3c47ac62eb0a6158487aaa80c20a615c9cfeaa2ae7fac2b01e44334f4af1394\": container with ID starting with d3c47ac62eb0a6158487aaa80c20a615c9cfeaa2ae7fac2b01e44334f4af1394 not found: ID does not exist" containerID="d3c47ac62eb0a6158487aaa80c20a615c9cfeaa2ae7fac2b01e44334f4af1394" Feb 19 13:13:48 crc kubenswrapper[4833]: I0219 13:13:48.799479 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3c47ac62eb0a6158487aaa80c20a615c9cfeaa2ae7fac2b01e44334f4af1394"} err="failed to get container status \"d3c47ac62eb0a6158487aaa80c20a615c9cfeaa2ae7fac2b01e44334f4af1394\": rpc error: code = NotFound desc = could not find container \"d3c47ac62eb0a6158487aaa80c20a615c9cfeaa2ae7fac2b01e44334f4af1394\": container with ID starting with d3c47ac62eb0a6158487aaa80c20a615c9cfeaa2ae7fac2b01e44334f4af1394 not found: ID does not exist" Feb 19 13:13:50 crc kubenswrapper[4833]: I0219 13:13:50.335695 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="911ad84d-7578-4dad-92d5-9a764385baaa" path="/var/lib/kubelet/pods/911ad84d-7578-4dad-92d5-9a764385baaa/volumes" Feb 19 13:13:57 crc kubenswrapper[4833]: I0219 13:13:57.315233 4833 scope.go:117] "RemoveContainer" containerID="68af062ad026f894823c5275509a3a85a3d7b9b44d6ca2d938db284880905483" Feb 19 13:13:57 crc kubenswrapper[4833]: E0219 13:13:57.316319 4833 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:14:11 crc kubenswrapper[4833]: I0219 13:14:11.315097 4833 scope.go:117] "RemoveContainer" containerID="68af062ad026f894823c5275509a3a85a3d7b9b44d6ca2d938db284880905483" Feb 19 13:14:11 crc kubenswrapper[4833]: E0219 13:14:11.315862 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:14:13 crc kubenswrapper[4833]: I0219 13:14:13.055379 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-987b-account-create-update-qf697"] Feb 19 13:14:13 crc kubenswrapper[4833]: I0219 13:14:13.066243 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-svb9r"] Feb 19 13:14:13 crc kubenswrapper[4833]: I0219 13:14:13.076610 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-987b-account-create-update-qf697"] Feb 19 13:14:13 crc kubenswrapper[4833]: I0219 13:14:13.087053 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-svb9r"] Feb 19 13:14:14 crc kubenswrapper[4833]: I0219 13:14:14.030237 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-0e4e-account-create-update-stpgm"] Feb 19 13:14:14 crc kubenswrapper[4833]: I0219 13:14:14.041374 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-78fb-account-create-update-q65b4"] Feb 19 13:14:14 crc kubenswrapper[4833]: I0219 13:14:14.049614 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-scjhz"] Feb 19 13:14:14 crc kubenswrapper[4833]: I0219 13:14:14.057842 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-scjhz"] Feb 19 13:14:14 crc kubenswrapper[4833]: I0219 13:14:14.067306 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-0e4e-account-create-update-stpgm"] Feb 19 13:14:14 crc kubenswrapper[4833]: I0219 13:14:14.075401 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-s67c7"] Feb 19 13:14:14 crc kubenswrapper[4833]: I0219 13:14:14.082674 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-78fb-account-create-update-q65b4"] Feb 19 13:14:14 crc kubenswrapper[4833]: I0219 13:14:14.091225 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-s67c7"] Feb 19 13:14:14 crc kubenswrapper[4833]: I0219 13:14:14.334356 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07c210e9-a412-4019-b8ae-2bf67642661a" path="/var/lib/kubelet/pods/07c210e9-a412-4019-b8ae-2bf67642661a/volumes" Feb 19 13:14:14 crc kubenswrapper[4833]: I0219 13:14:14.335590 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09b26dd3-c5dd-447f-a2ef-6d92e5ad231a" 
path="/var/lib/kubelet/pods/09b26dd3-c5dd-447f-a2ef-6d92e5ad231a/volumes" Feb 19 13:14:14 crc kubenswrapper[4833]: I0219 13:14:14.336779 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22d6b7eb-aa56-45d5-84c2-97d3732916db" path="/var/lib/kubelet/pods/22d6b7eb-aa56-45d5-84c2-97d3732916db/volumes" Feb 19 13:14:14 crc kubenswrapper[4833]: I0219 13:14:14.338394 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78f84e8d-9afd-4bdc-9dde-cb633b5d7083" path="/var/lib/kubelet/pods/78f84e8d-9afd-4bdc-9dde-cb633b5d7083/volumes" Feb 19 13:14:14 crc kubenswrapper[4833]: I0219 13:14:14.340933 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52037a8-0c1c-4c36-b1ca-f90b67f83ff0" path="/var/lib/kubelet/pods/a52037a8-0c1c-4c36-b1ca-f90b67f83ff0/volumes" Feb 19 13:14:14 crc kubenswrapper[4833]: I0219 13:14:14.341473 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0fb7215-a577-4756-8363-a4cba291a804" path="/var/lib/kubelet/pods/c0fb7215-a577-4756-8363-a4cba291a804/volumes" Feb 19 13:14:21 crc kubenswrapper[4833]: I0219 13:14:21.042218 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-hhm6g"] Feb 19 13:14:21 crc kubenswrapper[4833]: I0219 13:14:21.054296 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-hhm6g"] Feb 19 13:14:22 crc kubenswrapper[4833]: I0219 13:14:22.317486 4833 scope.go:117] "RemoveContainer" containerID="68af062ad026f894823c5275509a3a85a3d7b9b44d6ca2d938db284880905483" Feb 19 13:14:22 crc kubenswrapper[4833]: E0219 13:14:22.317932 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:14:22 crc kubenswrapper[4833]: I0219 13:14:22.347315 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8906713d-1f2f-46ef-9895-b4a2cc5fddff" path="/var/lib/kubelet/pods/8906713d-1f2f-46ef-9895-b4a2cc5fddff/volumes" Feb 19 13:14:28 crc kubenswrapper[4833]: I0219 13:14:28.053102 4833 generic.go:334] "Generic (PLEG): container finished" podID="ade3dcb5-7a6a-4bef-a706-01dbd2d074a1" containerID="9d04607679dc845d7749a30e45b2dc1cc80687bcf28d9985319314f1c77166a1" exitCode=0 Feb 19 13:14:28 crc kubenswrapper[4833]: I0219 13:14:28.053234 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gn8cb" event={"ID":"ade3dcb5-7a6a-4bef-a706-01dbd2d074a1","Type":"ContainerDied","Data":"9d04607679dc845d7749a30e45b2dc1cc80687bcf28d9985319314f1c77166a1"} Feb 19 13:14:29 crc kubenswrapper[4833]: I0219 13:14:29.569653 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gn8cb" Feb 19 13:14:29 crc kubenswrapper[4833]: I0219 13:14:29.721429 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdbgs\" (UniqueName: \"kubernetes.io/projected/ade3dcb5-7a6a-4bef-a706-01dbd2d074a1-kube-api-access-qdbgs\") pod \"ade3dcb5-7a6a-4bef-a706-01dbd2d074a1\" (UID: \"ade3dcb5-7a6a-4bef-a706-01dbd2d074a1\") " Feb 19 13:14:29 crc kubenswrapper[4833]: I0219 13:14:29.721684 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ade3dcb5-7a6a-4bef-a706-01dbd2d074a1-inventory\") pod \"ade3dcb5-7a6a-4bef-a706-01dbd2d074a1\" (UID: \"ade3dcb5-7a6a-4bef-a706-01dbd2d074a1\") " Feb 19 13:14:29 crc kubenswrapper[4833]: I0219 13:14:29.721754 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ade3dcb5-7a6a-4bef-a706-01dbd2d074a1-ssh-key-openstack-edpm-ipam\") pod \"ade3dcb5-7a6a-4bef-a706-01dbd2d074a1\" (UID: \"ade3dcb5-7a6a-4bef-a706-01dbd2d074a1\") " Feb 19 13:14:29 crc kubenswrapper[4833]: I0219 13:14:29.727538 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ade3dcb5-7a6a-4bef-a706-01dbd2d074a1-kube-api-access-qdbgs" (OuterVolumeSpecName: "kube-api-access-qdbgs") pod "ade3dcb5-7a6a-4bef-a706-01dbd2d074a1" (UID: "ade3dcb5-7a6a-4bef-a706-01dbd2d074a1"). InnerVolumeSpecName "kube-api-access-qdbgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:14:29 crc kubenswrapper[4833]: I0219 13:14:29.748119 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade3dcb5-7a6a-4bef-a706-01dbd2d074a1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ade3dcb5-7a6a-4bef-a706-01dbd2d074a1" (UID: "ade3dcb5-7a6a-4bef-a706-01dbd2d074a1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:14:29 crc kubenswrapper[4833]: I0219 13:14:29.776183 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade3dcb5-7a6a-4bef-a706-01dbd2d074a1-inventory" (OuterVolumeSpecName: "inventory") pod "ade3dcb5-7a6a-4bef-a706-01dbd2d074a1" (UID: "ade3dcb5-7a6a-4bef-a706-01dbd2d074a1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:14:29 crc kubenswrapper[4833]: I0219 13:14:29.824279 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdbgs\" (UniqueName: \"kubernetes.io/projected/ade3dcb5-7a6a-4bef-a706-01dbd2d074a1-kube-api-access-qdbgs\") on node \"crc\" DevicePath \"\"" Feb 19 13:14:29 crc kubenswrapper[4833]: I0219 13:14:29.824333 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ade3dcb5-7a6a-4bef-a706-01dbd2d074a1-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 13:14:29 crc kubenswrapper[4833]: I0219 13:14:29.824354 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ade3dcb5-7a6a-4bef-a706-01dbd2d074a1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 13:14:30 crc kubenswrapper[4833]: I0219 13:14:30.096749 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gn8cb" event={"ID":"ade3dcb5-7a6a-4bef-a706-01dbd2d074a1","Type":"ContainerDied","Data":"cb68576589d7a51fcddafd68c8403e5038500de8e2214bc71dd2a5a9eadf53bf"} Feb 19 13:14:30 crc kubenswrapper[4833]: I0219 13:14:30.096796 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb68576589d7a51fcddafd68c8403e5038500de8e2214bc71dd2a5a9eadf53bf" Feb 19 13:14:30 crc kubenswrapper[4833]: I0219 13:14:30.096844 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-gn8cb" Feb 19 13:14:30 crc kubenswrapper[4833]: I0219 13:14:30.208624 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwxsf"] Feb 19 13:14:30 crc kubenswrapper[4833]: E0219 13:14:30.209107 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="911ad84d-7578-4dad-92d5-9a764385baaa" containerName="registry-server" Feb 19 13:14:30 crc kubenswrapper[4833]: I0219 13:14:30.209127 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="911ad84d-7578-4dad-92d5-9a764385baaa" containerName="registry-server" Feb 19 13:14:30 crc kubenswrapper[4833]: E0219 13:14:30.209151 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="911ad84d-7578-4dad-92d5-9a764385baaa" containerName="extract-content" Feb 19 13:14:30 crc kubenswrapper[4833]: I0219 13:14:30.209164 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="911ad84d-7578-4dad-92d5-9a764385baaa" containerName="extract-content" Feb 19 13:14:30 crc kubenswrapper[4833]: E0219 13:14:30.209184 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade3dcb5-7a6a-4bef-a706-01dbd2d074a1" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 19 13:14:30 crc kubenswrapper[4833]: I0219 13:14:30.209196 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade3dcb5-7a6a-4bef-a706-01dbd2d074a1" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 19 13:14:30 crc kubenswrapper[4833]: E0219 13:14:30.209254 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="911ad84d-7578-4dad-92d5-9a764385baaa" containerName="extract-utilities" Feb 19 13:14:30 crc kubenswrapper[4833]: I0219 13:14:30.209264 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="911ad84d-7578-4dad-92d5-9a764385baaa" containerName="extract-utilities" Feb 19 13:14:30 crc kubenswrapper[4833]: 
I0219 13:14:30.209654 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="911ad84d-7578-4dad-92d5-9a764385baaa" containerName="registry-server" Feb 19 13:14:30 crc kubenswrapper[4833]: I0219 13:14:30.209694 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade3dcb5-7a6a-4bef-a706-01dbd2d074a1" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 19 13:14:30 crc kubenswrapper[4833]: I0219 13:14:30.210709 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwxsf" Feb 19 13:14:30 crc kubenswrapper[4833]: I0219 13:14:30.215730 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xcxf4" Feb 19 13:14:30 crc kubenswrapper[4833]: I0219 13:14:30.215754 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 13:14:30 crc kubenswrapper[4833]: I0219 13:14:30.215808 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 13:14:30 crc kubenswrapper[4833]: I0219 13:14:30.217081 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 13:14:30 crc kubenswrapper[4833]: I0219 13:14:30.228332 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwxsf"] Feb 19 13:14:30 crc kubenswrapper[4833]: I0219 13:14:30.232051 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1060705-48ca-43e4-8a72-0fbd655875a6-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pwxsf\" (UID: \"d1060705-48ca-43e4-8a72-0fbd655875a6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwxsf" Feb 19 13:14:30 crc kubenswrapper[4833]: I0219 13:14:30.232162 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smvjm\" (UniqueName: \"kubernetes.io/projected/d1060705-48ca-43e4-8a72-0fbd655875a6-kube-api-access-smvjm\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pwxsf\" (UID: \"d1060705-48ca-43e4-8a72-0fbd655875a6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwxsf" Feb 19 13:14:30 crc kubenswrapper[4833]: I0219 13:14:30.232248 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d1060705-48ca-43e4-8a72-0fbd655875a6-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pwxsf\" (UID: \"d1060705-48ca-43e4-8a72-0fbd655875a6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwxsf" Feb 19 13:14:30 crc kubenswrapper[4833]: I0219 13:14:30.336123 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1060705-48ca-43e4-8a72-0fbd655875a6-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pwxsf\" (UID: \"d1060705-48ca-43e4-8a72-0fbd655875a6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwxsf" Feb 19 13:14:30 crc kubenswrapper[4833]: I0219 13:14:30.336417 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-smvjm\" (UniqueName: \"kubernetes.io/projected/d1060705-48ca-43e4-8a72-0fbd655875a6-kube-api-access-smvjm\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pwxsf\" (UID: \"d1060705-48ca-43e4-8a72-0fbd655875a6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwxsf" Feb 19 13:14:30 crc kubenswrapper[4833]: I0219 13:14:30.336613 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d1060705-48ca-43e4-8a72-0fbd655875a6-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pwxsf\" (UID: \"d1060705-48ca-43e4-8a72-0fbd655875a6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwxsf" Feb 19 13:14:30 crc kubenswrapper[4833]: I0219 13:14:30.338283 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 13:14:30 crc kubenswrapper[4833]: I0219 13:14:30.340877 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 13:14:30 crc kubenswrapper[4833]: I0219 13:14:30.368112 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d1060705-48ca-43e4-8a72-0fbd655875a6-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pwxsf\" (UID: \"d1060705-48ca-43e4-8a72-0fbd655875a6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwxsf" Feb 19 13:14:30 crc kubenswrapper[4833]: I0219 13:14:30.368290 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smvjm\" (UniqueName: \"kubernetes.io/projected/d1060705-48ca-43e4-8a72-0fbd655875a6-kube-api-access-smvjm\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pwxsf\" (UID: \"d1060705-48ca-43e4-8a72-0fbd655875a6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwxsf" Feb 19 13:14:30 crc kubenswrapper[4833]: I0219 13:14:30.372516 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1060705-48ca-43e4-8a72-0fbd655875a6-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-pwxsf\" (UID: \"d1060705-48ca-43e4-8a72-0fbd655875a6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwxsf" Feb 19 13:14:30 crc kubenswrapper[4833]: I0219 13:14:30.532904 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xcxf4" Feb 19 13:14:30 crc kubenswrapper[4833]: I0219 13:14:30.542209 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwxsf" Feb 19 13:14:31 crc kubenswrapper[4833]: I0219 13:14:31.066726 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwxsf"] Feb 19 13:14:31 crc kubenswrapper[4833]: I0219 13:14:31.106914 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwxsf" event={"ID":"d1060705-48ca-43e4-8a72-0fbd655875a6","Type":"ContainerStarted","Data":"4c7e832437bf309ca71ecba1611868eb08cd951aab4ee696a91790f0bbcca551"} Feb 19 13:14:31 crc kubenswrapper[4833]: I0219 13:14:31.666521 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 13:14:32 crc kubenswrapper[4833]: I0219 13:14:32.121946 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwxsf" event={"ID":"d1060705-48ca-43e4-8a72-0fbd655875a6","Type":"ContainerStarted","Data":"4bb6ddbe9ec18450fd6b22c4a04be50a39156f8889fe9470f1ae3704ca6ee4c3"} Feb 19 13:14:32 crc kubenswrapper[4833]: I0219 13:14:32.162299 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwxsf" podStartSLOduration=1.566564925 podStartE2EDuration="2.162273508s" podCreationTimestamp="2026-02-19 13:14:30 +0000 UTC" firstStartedPulling="2026-02-19 13:14:31.068312874 +0000 UTC m=+1681.463831642" lastFinishedPulling="2026-02-19 13:14:31.664021457 +0000 UTC m=+1682.059540225" observedRunningTime="2026-02-19 13:14:32.145404365 +0000 UTC m=+1682.540923223" watchObservedRunningTime="2026-02-19 13:14:32.162273508 +0000 UTC m=+1682.557792316" Feb 19 13:14:34 crc kubenswrapper[4833]: I0219 13:14:34.315734 4833 scope.go:117] "RemoveContainer" containerID="68af062ad026f894823c5275509a3a85a3d7b9b44d6ca2d938db284880905483" Feb 19 13:14:34 crc kubenswrapper[4833]: E0219 13:14:34.316288 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:14:41 crc kubenswrapper[4833]: I0219 13:14:41.052287 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-3cd9-account-create-update-6dwwd"] Feb 19 13:14:41 crc kubenswrapper[4833]: I0219 13:14:41.064314 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-3cd9-account-create-update-6dwwd"] Feb 19 13:14:42 crc kubenswrapper[4833]: I0219 13:14:42.053714 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-ql56p"] Feb 19 13:14:42 crc kubenswrapper[4833]: I0219 13:14:42.070632 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-c4b6-account-create-update-58cbw"] Feb 19 13:14:42 crc kubenswrapper[4833]: I0219 13:14:42.078745 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-85x95"] Feb 19 13:14:42 crc kubenswrapper[4833]: I0219 13:14:42.086666 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b003-account-create-update-f27jp"] Feb 19 13:14:42 crc kubenswrapper[4833]: 
I0219 13:14:42.095123 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-lgbvr"] Feb 19 13:14:42 crc kubenswrapper[4833]: I0219 13:14:42.103385 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-ql56p"] Feb 19 13:14:42 crc kubenswrapper[4833]: I0219 13:14:42.110791 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-c4b6-account-create-update-58cbw"] Feb 19 13:14:42 crc kubenswrapper[4833]: I0219 13:14:42.118883 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-b003-account-create-update-f27jp"] Feb 19 13:14:42 crc kubenswrapper[4833]: I0219 13:14:42.126692 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-lgbvr"] Feb 19 13:14:42 crc kubenswrapper[4833]: I0219 13:14:42.134617 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-85x95"] Feb 19 13:14:42 crc kubenswrapper[4833]: I0219 13:14:42.330200 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2074acd6-4d68-4d93-84af-a608758fddd0" path="/var/lib/kubelet/pods/2074acd6-4d68-4d93-84af-a608758fddd0/volumes" Feb 19 13:14:42 crc kubenswrapper[4833]: I0219 13:14:42.331387 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3686d9a5-edf5-4ab9-9c9d-6547cf2c6351" path="/var/lib/kubelet/pods/3686d9a5-edf5-4ab9-9c9d-6547cf2c6351/volumes" Feb 19 13:14:42 crc kubenswrapper[4833]: I0219 13:14:42.332678 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a5eae49-48c5-4eb3-869c-1af4cea5877d" path="/var/lib/kubelet/pods/9a5eae49-48c5-4eb3-869c-1af4cea5877d/volumes" Feb 19 13:14:42 crc kubenswrapper[4833]: I0219 13:14:42.333763 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2424429-364d-4bce-b9da-32ea56eae279" path="/var/lib/kubelet/pods/c2424429-364d-4bce-b9da-32ea56eae279/volumes" Feb 19 13:14:42 crc kubenswrapper[4833]: I0219 13:14:42.336405 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4a33236-11e5-4757-a143-f57fd4f5a5f4" path="/var/lib/kubelet/pods/c4a33236-11e5-4757-a143-f57fd4f5a5f4/volumes" Feb 19 13:14:42 crc kubenswrapper[4833]: I0219 13:14:42.337808 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d908d190-3ee3-4403-90d3-8635493b7b6c" path="/var/lib/kubelet/pods/d908d190-3ee3-4403-90d3-8635493b7b6c/volumes" Feb 19 13:14:44 crc kubenswrapper[4833]: I0219 13:14:44.031154 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-nl4pd"] Feb 19 13:14:44 crc kubenswrapper[4833]: I0219 13:14:44.041768 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-nl4pd"] Feb 19 13:14:44 crc kubenswrapper[4833]: I0219 13:14:44.337190 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ab56183-f7ec-44a3-95db-66064f67a074" path="/var/lib/kubelet/pods/8ab56183-f7ec-44a3-95db-66064f67a074/volumes" Feb 19 13:14:46 crc kubenswrapper[4833]: I0219 13:14:46.316304 4833 scope.go:117] "RemoveContainer" containerID="68af062ad026f894823c5275509a3a85a3d7b9b44d6ca2d938db284880905483" Feb 19 13:14:46 crc kubenswrapper[4833]: E0219 13:14:46.316859 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:14:47 crc kubenswrapper[4833]: I0219 13:14:47.053329 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-h5szg"] Feb 19 13:14:47 crc kubenswrapper[4833]: I0219 13:14:47.073566 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-h5szg"] Feb 19 13:14:47 crc kubenswrapper[4833]: I0219 13:14:47.578907 4833 scope.go:117] "RemoveContainer" containerID="7fc32086bbe804cfa85dc737aee3b51c6ed822be6ce41ae28d21ccb9943a040d" Feb 19 13:14:47 crc kubenswrapper[4833]: I0219 13:14:47.619186 4833 scope.go:117] "RemoveContainer" containerID="17b0f8104604bfa715f7fea212a97b385d5a4b5459b23ea5dfce9269801791f2" Feb 19 13:14:47 crc kubenswrapper[4833]: I0219 13:14:47.647474 4833 scope.go:117] "RemoveContainer" containerID="f942a7e907365c058ad89b4344a5751b4456ede58e8f70233ca17deaac34287a" Feb 19 13:14:47 crc kubenswrapper[4833]: I0219 13:14:47.686892 4833 scope.go:117] "RemoveContainer" containerID="a2d5cefe73d91412b5c95c3a584bfca4b5de6e1fa70c2cfa27c36895c12203b2" Feb 19 13:14:47 crc kubenswrapper[4833]: I0219 13:14:47.734620 4833 scope.go:117] "RemoveContainer" containerID="cc7917898ca7e33ec77ecc5ba15b4c516f0c66d52bedcbbd0275b3eae18c39d0" Feb 19 13:14:47 crc kubenswrapper[4833]: I0219 13:14:47.794097 4833 scope.go:117] "RemoveContainer" containerID="09554af6585cf5629a934c5fa37e3596c8b3bfe50fea3741085a4b0bd1a2816e" Feb 19 13:14:47 crc kubenswrapper[4833]: I0219 13:14:47.819547 4833 scope.go:117] "RemoveContainer" containerID="534aa3b9ce24afc9a12522747b2c4a86edd9875760fb692528afb80e8cfa4e84" Feb 19 13:14:47 crc kubenswrapper[4833]: I0219 13:14:47.849527 4833 scope.go:117] "RemoveContainer" containerID="a221190702abb47c9dd475c526548391c9a956acbb95645d602f01a183d38dfd" Feb 19 13:14:47 crc kubenswrapper[4833]: I0219 13:14:47.870796 4833 scope.go:117] "RemoveContainer" containerID="19c825afeba88a6065f345e0ad6851bf0880eee2f9c49afde8667fa659ddcf05" Feb 19 13:14:47 crc kubenswrapper[4833]: I0219 13:14:47.895964 4833 scope.go:117] "RemoveContainer" containerID="b9617bc9bd8666bab0d214327eb3d35127c2cf485f058cfbb3fba9b8acbf2091" Feb 19 13:14:47 crc kubenswrapper[4833]: I0219 13:14:47.917248 4833 scope.go:117] "RemoveContainer" containerID="deabc5367851d3c222f1ba7839d84160efb5e1e124544fe5e9a142f0e3f83e3a" Feb 19 13:14:47 crc kubenswrapper[4833]: I0219 13:14:47.939100 4833 scope.go:117] "RemoveContainer" containerID="864e10b4af564a6797f4bc901076c885ca8781850292e5abd34b0daf2fc0f467" Feb 19 13:14:47 crc kubenswrapper[4833]: I0219 13:14:47.957429 4833 scope.go:117] "RemoveContainer" containerID="1611824dd08630a7b246b0f455581a5582ea29bc33214be853d2e64361223650" Feb 19 13:14:47 crc kubenswrapper[4833]: I0219 13:14:47.979436 4833 scope.go:117] "RemoveContainer" containerID="b1847606985b67e891fa82e74db42582387e187de7c23f539064e004cbe7a384" Feb 19 13:14:48 crc kubenswrapper[4833]: I0219 13:14:48.335745 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddecd476-ca49-4043-a064-b769163f4988" path="/var/lib/kubelet/pods/ddecd476-ca49-4043-a064-b769163f4988/volumes" Feb 19 13:14:59 crc kubenswrapper[4833]: I0219 13:14:59.315475 4833 scope.go:117] "RemoveContainer" containerID="68af062ad026f894823c5275509a3a85a3d7b9b44d6ca2d938db284880905483" Feb 19 13:14:59 crc kubenswrapper[4833]: E0219 13:14:59.318611 4833 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:15:00 crc kubenswrapper[4833]: I0219 13:15:00.149195 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525115-r54cj"] Feb 19 13:15:00 crc kubenswrapper[4833]: I0219 13:15:00.150793 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525115-r54cj" Feb 19 13:15:00 crc kubenswrapper[4833]: I0219 13:15:00.153939 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 13:15:00 crc kubenswrapper[4833]: I0219 13:15:00.155329 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 13:15:00 crc kubenswrapper[4833]: I0219 13:15:00.166415 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525115-r54cj"] Feb 19 13:15:00 crc kubenswrapper[4833]: I0219 13:15:00.256204 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cff0514-b00f-44f8-8193-851e8e1c2716-config-volume\") pod \"collect-profiles-29525115-r54cj\" (UID: \"4cff0514-b00f-44f8-8193-851e8e1c2716\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525115-r54cj" Feb 19 13:15:00 crc kubenswrapper[4833]: I0219 13:15:00.256743 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5pss\" (UniqueName: \"kubernetes.io/projected/4cff0514-b00f-44f8-8193-851e8e1c2716-kube-api-access-b5pss\") pod \"collect-profiles-29525115-r54cj\" (UID: \"4cff0514-b00f-44f8-8193-851e8e1c2716\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525115-r54cj" Feb 19 13:15:00 crc kubenswrapper[4833]: I0219 13:15:00.256882 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4cff0514-b00f-44f8-8193-851e8e1c2716-secret-volume\") pod \"collect-profiles-29525115-r54cj\" (UID: \"4cff0514-b00f-44f8-8193-851e8e1c2716\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525115-r54cj" Feb 19 13:15:00 crc kubenswrapper[4833]: I0219 13:15:00.358210 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4cff0514-b00f-44f8-8193-851e8e1c2716-secret-volume\") pod \"collect-profiles-29525115-r54cj\" (UID: \"4cff0514-b00f-44f8-8193-851e8e1c2716\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525115-r54cj" Feb 19 13:15:00 crc kubenswrapper[4833]: I0219 13:15:00.358890 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cff0514-b00f-44f8-8193-851e8e1c2716-config-volume\") pod \"collect-profiles-29525115-r54cj\" (UID: \"4cff0514-b00f-44f8-8193-851e8e1c2716\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525115-r54cj" Feb 19 13:15:00 crc kubenswrapper[4833]: I0219 13:15:00.359049 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5pss\" (UniqueName: \"kubernetes.io/projected/4cff0514-b00f-44f8-8193-851e8e1c2716-kube-api-access-b5pss\") pod \"collect-profiles-29525115-r54cj\" (UID: \"4cff0514-b00f-44f8-8193-851e8e1c2716\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525115-r54cj" Feb 19 13:15:00 crc kubenswrapper[4833]: I0219 13:15:00.360445 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cff0514-b00f-44f8-8193-851e8e1c2716-config-volume\") pod \"collect-profiles-29525115-r54cj\" (UID: \"4cff0514-b00f-44f8-8193-851e8e1c2716\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525115-r54cj" Feb 19 13:15:00 crc kubenswrapper[4833]: I0219 13:15:00.373233 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4cff0514-b00f-44f8-8193-851e8e1c2716-secret-volume\") pod \"collect-profiles-29525115-r54cj\" (UID: \"4cff0514-b00f-44f8-8193-851e8e1c2716\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525115-r54cj" Feb 19 13:15:00 crc kubenswrapper[4833]: I0219 13:15:00.375033 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5pss\" (UniqueName: \"kubernetes.io/projected/4cff0514-b00f-44f8-8193-851e8e1c2716-kube-api-access-b5pss\") pod \"collect-profiles-29525115-r54cj\" (UID: \"4cff0514-b00f-44f8-8193-851e8e1c2716\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525115-r54cj" Feb 19 13:15:00 crc kubenswrapper[4833]: I0219 13:15:00.487091 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525115-r54cj" Feb 19 13:15:00 crc kubenswrapper[4833]: I0219 13:15:00.969592 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525115-r54cj"] Feb 19 13:15:00 crc kubenswrapper[4833]: W0219 13:15:00.971111 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cff0514_b00f_44f8_8193_851e8e1c2716.slice/crio-9ba60b0f00d3f0ef553b34b37c8f14f740ca6972cba35ed0aa820dafe447022e WatchSource:0}: Error finding container 9ba60b0f00d3f0ef553b34b37c8f14f740ca6972cba35ed0aa820dafe447022e: Status 404 returned error can't find the container with id 9ba60b0f00d3f0ef553b34b37c8f14f740ca6972cba35ed0aa820dafe447022e Feb 19 13:15:01 crc kubenswrapper[4833]: I0219 13:15:01.428779 4833 generic.go:334] "Generic (PLEG): container finished" podID="4cff0514-b00f-44f8-8193-851e8e1c2716" containerID="914662436eca46cd19990eedca74f5061baf0a75ee2122a618abbddb6a5a2d1d" exitCode=0 Feb 19 13:15:01 crc kubenswrapper[4833]: I0219 13:15:01.428823 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525115-r54cj" event={"ID":"4cff0514-b00f-44f8-8193-851e8e1c2716","Type":"ContainerDied","Data":"914662436eca46cd19990eedca74f5061baf0a75ee2122a618abbddb6a5a2d1d"} Feb 19 13:15:01 crc kubenswrapper[4833]: I0219 13:15:01.428849 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525115-r54cj" event={"ID":"4cff0514-b00f-44f8-8193-851e8e1c2716","Type":"ContainerStarted","Data":"9ba60b0f00d3f0ef553b34b37c8f14f740ca6972cba35ed0aa820dafe447022e"} Feb 19 13:15:02 crc kubenswrapper[4833]: I0219 13:15:02.760647 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525115-r54cj" Feb 19 13:15:02 crc kubenswrapper[4833]: I0219 13:15:02.805624 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4cff0514-b00f-44f8-8193-851e8e1c2716-secret-volume\") pod \"4cff0514-b00f-44f8-8193-851e8e1c2716\" (UID: \"4cff0514-b00f-44f8-8193-851e8e1c2716\") " Feb 19 13:15:02 crc kubenswrapper[4833]: I0219 13:15:02.805986 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cff0514-b00f-44f8-8193-851e8e1c2716-config-volume\") pod \"4cff0514-b00f-44f8-8193-851e8e1c2716\" (UID: \"4cff0514-b00f-44f8-8193-851e8e1c2716\") " Feb 19 13:15:02 crc kubenswrapper[4833]: I0219 13:15:02.806010 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5pss\" (UniqueName: \"kubernetes.io/projected/4cff0514-b00f-44f8-8193-851e8e1c2716-kube-api-access-b5pss\") pod \"4cff0514-b00f-44f8-8193-851e8e1c2716\" (UID: \"4cff0514-b00f-44f8-8193-851e8e1c2716\") " Feb 19 13:15:02 crc kubenswrapper[4833]: I0219 13:15:02.807745 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cff0514-b00f-44f8-8193-851e8e1c2716-config-volume" (OuterVolumeSpecName: "config-volume") pod "4cff0514-b00f-44f8-8193-851e8e1c2716" (UID: "4cff0514-b00f-44f8-8193-851e8e1c2716"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:15:02 crc kubenswrapper[4833]: I0219 13:15:02.813071 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cff0514-b00f-44f8-8193-851e8e1c2716-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4cff0514-b00f-44f8-8193-851e8e1c2716" (UID: "4cff0514-b00f-44f8-8193-851e8e1c2716"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:15:02 crc kubenswrapper[4833]: I0219 13:15:02.816764 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cff0514-b00f-44f8-8193-851e8e1c2716-kube-api-access-b5pss" (OuterVolumeSpecName: "kube-api-access-b5pss") pod "4cff0514-b00f-44f8-8193-851e8e1c2716" (UID: "4cff0514-b00f-44f8-8193-851e8e1c2716"). InnerVolumeSpecName "kube-api-access-b5pss". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:15:02 crc kubenswrapper[4833]: I0219 13:15:02.907210 4833 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4cff0514-b00f-44f8-8193-851e8e1c2716-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 13:15:02 crc kubenswrapper[4833]: I0219 13:15:02.907245 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5pss\" (UniqueName: \"kubernetes.io/projected/4cff0514-b00f-44f8-8193-851e8e1c2716-kube-api-access-b5pss\") on node \"crc\" DevicePath \"\"" Feb 19 13:15:02 crc kubenswrapper[4833]: I0219 13:15:02.907259 4833 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4cff0514-b00f-44f8-8193-851e8e1c2716-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 13:15:03 crc kubenswrapper[4833]: I0219 13:15:03.452672 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525115-r54cj" event={"ID":"4cff0514-b00f-44f8-8193-851e8e1c2716","Type":"ContainerDied","Data":"9ba60b0f00d3f0ef553b34b37c8f14f740ca6972cba35ed0aa820dafe447022e"} Feb 19 13:15:03 crc kubenswrapper[4833]: I0219 13:15:03.452969 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ba60b0f00d3f0ef553b34b37c8f14f740ca6972cba35ed0aa820dafe447022e" Feb 19 13:15:03 crc kubenswrapper[4833]: I0219 13:15:03.452726 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525115-r54cj" Feb 19 13:15:11 crc kubenswrapper[4833]: I0219 13:15:11.314763 4833 scope.go:117] "RemoveContainer" containerID="68af062ad026f894823c5275509a3a85a3d7b9b44d6ca2d938db284880905483" Feb 19 13:15:11 crc kubenswrapper[4833]: E0219 13:15:11.315484 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:15:19 crc kubenswrapper[4833]: I0219 13:15:19.048915 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-m6ckw"] Feb 19 13:15:19 crc kubenswrapper[4833]: I0219 13:15:19.057065 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-m6ckw"] Feb 19 13:15:20 crc kubenswrapper[4833]: I0219 13:15:20.326125 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8211e149-f236-498d-bc79-183c39d9d62e" path="/var/lib/kubelet/pods/8211e149-f236-498d-bc79-183c39d9d62e/volumes" Feb 19 13:15:26 crc kubenswrapper[4833]: I0219 13:15:26.315205 4833 scope.go:117] "RemoveContainer" containerID="68af062ad026f894823c5275509a3a85a3d7b9b44d6ca2d938db284880905483" Feb 19 13:15:26 crc kubenswrapper[4833]: E0219 13:15:26.316446 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:15:29 crc kubenswrapper[4833]: I0219 13:15:29.032856 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-htb5q"] Feb 19 13:15:29 crc kubenswrapper[4833]: I0219 13:15:29.040418 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-hjsb7"] Feb 19 13:15:29 crc kubenswrapper[4833]: I0219 13:15:29.047542 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-hjsb7"] Feb 19 13:15:29 crc kubenswrapper[4833]: I0219 13:15:29.056022 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-htb5q"] Feb 19 13:15:30 crc kubenswrapper[4833]: I0219 13:15:30.330078 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bf42233-f79a-4f59-9db4-2aab3744b616" path="/var/lib/kubelet/pods/0bf42233-f79a-4f59-9db4-2aab3744b616/volumes" Feb 19 13:15:30 crc kubenswrapper[4833]: I0219 13:15:30.330902 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e40f6228-f038-4dc4-9180-f399b9a8c30b" path="/var/lib/kubelet/pods/e40f6228-f038-4dc4-9180-f399b9a8c30b/volumes" Feb 19 13:15:37 crc kubenswrapper[4833]: I0219 13:15:37.315861 4833 scope.go:117] "RemoveContainer" containerID="68af062ad026f894823c5275509a3a85a3d7b9b44d6ca2d938db284880905483" Feb 19 13:15:37 crc kubenswrapper[4833]: E0219 13:15:37.316761 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:15:41 crc kubenswrapper[4833]: I0219 13:15:41.045976 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-hnxz9"] Feb 19 13:15:41 crc kubenswrapper[4833]: I0219 13:15:41.054153 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-hnxz9"] Feb 19 13:15:42 crc kubenswrapper[4833]: I0219 13:15:42.048874 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-dsz68"] Feb 19 13:15:42 crc kubenswrapper[4833]: I0219 13:15:42.064879 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-dsz68"] Feb 19 13:15:42 crc kubenswrapper[4833]: I0219 13:15:42.324614 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2902e7f1-6f1b-4b67-a9fa-fd031a961900" path="/var/lib/kubelet/pods/2902e7f1-6f1b-4b67-a9fa-fd031a961900/volumes" Feb 19 13:15:42 crc kubenswrapper[4833]: I0219 13:15:42.325419 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5aed427-a4af-40b6-bd9c-10284e0935ce" path="/var/lib/kubelet/pods/d5aed427-a4af-40b6-bd9c-10284e0935ce/volumes" Feb 19 13:15:45 crc kubenswrapper[4833]: I0219 13:15:45.861403 4833 generic.go:334] "Generic (PLEG): container finished" podID="d1060705-48ca-43e4-8a72-0fbd655875a6" containerID="4bb6ddbe9ec18450fd6b22c4a04be50a39156f8889fe9470f1ae3704ca6ee4c3" exitCode=0 Feb 19 13:15:45 crc kubenswrapper[4833]: I0219 13:15:45.861584 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwxsf" event={"ID":"d1060705-48ca-43e4-8a72-0fbd655875a6","Type":"ContainerDied","Data":"4bb6ddbe9ec18450fd6b22c4a04be50a39156f8889fe9470f1ae3704ca6ee4c3"} Feb 19 13:15:47 crc kubenswrapper[4833]: I0219 13:15:47.332638 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwxsf" Feb 19 13:15:47 crc kubenswrapper[4833]: I0219 13:15:47.518938 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smvjm\" (UniqueName: \"kubernetes.io/projected/d1060705-48ca-43e4-8a72-0fbd655875a6-kube-api-access-smvjm\") pod \"d1060705-48ca-43e4-8a72-0fbd655875a6\" (UID: \"d1060705-48ca-43e4-8a72-0fbd655875a6\") " Feb 19 13:15:47 crc kubenswrapper[4833]: I0219 13:15:47.519459 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1060705-48ca-43e4-8a72-0fbd655875a6-ssh-key-openstack-edpm-ipam\") pod \"d1060705-48ca-43e4-8a72-0fbd655875a6\" (UID: \"d1060705-48ca-43e4-8a72-0fbd655875a6\") " Feb 19 13:15:47 crc kubenswrapper[4833]: I0219 13:15:47.519654 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d1060705-48ca-43e4-8a72-0fbd655875a6-inventory\") pod \"d1060705-48ca-43e4-8a72-0fbd655875a6\" (UID: \"d1060705-48ca-43e4-8a72-0fbd655875a6\") " Feb 19 13:15:47 crc kubenswrapper[4833]: I0219 13:15:47.527778 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1060705-48ca-43e4-8a72-0fbd655875a6-kube-api-access-smvjm" (OuterVolumeSpecName: "kube-api-access-smvjm") pod "d1060705-48ca-43e4-8a72-0fbd655875a6" (UID: "d1060705-48ca-43e4-8a72-0fbd655875a6"). InnerVolumeSpecName "kube-api-access-smvjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:15:47 crc kubenswrapper[4833]: I0219 13:15:47.548816 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1060705-48ca-43e4-8a72-0fbd655875a6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d1060705-48ca-43e4-8a72-0fbd655875a6" (UID: "d1060705-48ca-43e4-8a72-0fbd655875a6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:15:47 crc kubenswrapper[4833]: I0219 13:15:47.570646 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1060705-48ca-43e4-8a72-0fbd655875a6-inventory" (OuterVolumeSpecName: "inventory") pod "d1060705-48ca-43e4-8a72-0fbd655875a6" (UID: "d1060705-48ca-43e4-8a72-0fbd655875a6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:15:47 crc kubenswrapper[4833]: I0219 13:15:47.622156 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smvjm\" (UniqueName: \"kubernetes.io/projected/d1060705-48ca-43e4-8a72-0fbd655875a6-kube-api-access-smvjm\") on node \"crc\" DevicePath \"\"" Feb 19 13:15:47 crc kubenswrapper[4833]: I0219 13:15:47.622201 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d1060705-48ca-43e4-8a72-0fbd655875a6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 13:15:47 crc kubenswrapper[4833]: I0219 13:15:47.622221 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d1060705-48ca-43e4-8a72-0fbd655875a6-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 13:15:47 crc kubenswrapper[4833]: I0219 13:15:47.885869 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwxsf" event={"ID":"d1060705-48ca-43e4-8a72-0fbd655875a6","Type":"ContainerDied","Data":"4c7e832437bf309ca71ecba1611868eb08cd951aab4ee696a91790f0bbcca551"} Feb 19 13:15:47 crc kubenswrapper[4833]: I0219 13:15:47.885920 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-pwxsf" Feb 19 13:15:47 crc kubenswrapper[4833]: I0219 13:15:47.885924 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c7e832437bf309ca71ecba1611868eb08cd951aab4ee696a91790f0bbcca551" Feb 19 13:15:47 crc kubenswrapper[4833]: I0219 13:15:47.975720 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hfnv"] Feb 19 13:15:47 crc kubenswrapper[4833]: E0219 13:15:47.976139 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cff0514-b00f-44f8-8193-851e8e1c2716" containerName="collect-profiles" Feb 19 13:15:47 crc kubenswrapper[4833]: I0219 13:15:47.976161 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cff0514-b00f-44f8-8193-851e8e1c2716" containerName="collect-profiles" Feb 19 13:15:47 crc kubenswrapper[4833]: E0219 13:15:47.976187 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1060705-48ca-43e4-8a72-0fbd655875a6" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 19 13:15:47 crc kubenswrapper[4833]: I0219 13:15:47.976197 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1060705-48ca-43e4-8a72-0fbd655875a6" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 19 13:15:47 crc kubenswrapper[4833]: I0219 13:15:47.976405 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cff0514-b00f-44f8-8193-851e8e1c2716" containerName="collect-profiles" Feb 19 13:15:47 crc kubenswrapper[4833]: I0219 13:15:47.976429 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1060705-48ca-43e4-8a72-0fbd655875a6" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 19 13:15:47 crc kubenswrapper[4833]: I0219 13:15:47.977169 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hfnv" Feb 19 13:15:47 crc kubenswrapper[4833]: I0219 13:15:47.979312 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 13:15:47 crc kubenswrapper[4833]: I0219 13:15:47.979632 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 13:15:47 crc kubenswrapper[4833]: I0219 13:15:47.980212 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 13:15:47 crc kubenswrapper[4833]: I0219 13:15:47.983754 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xcxf4" Feb 19 13:15:47 crc kubenswrapper[4833]: I0219 13:15:47.998043 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hfnv"] Feb 19 13:15:48 crc kubenswrapper[4833]: I0219 13:15:48.139243 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6533780f-2a0a-484f-afa5-ad561486e8a2-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5hfnv\" (UID: \"6533780f-2a0a-484f-afa5-ad561486e8a2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hfnv" Feb 19 13:15:48 crc kubenswrapper[4833]: I0219 13:15:48.139300 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6533780f-2a0a-484f-afa5-ad561486e8a2-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5hfnv\" (UID: \"6533780f-2a0a-484f-afa5-ad561486e8a2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hfnv" Feb 19 13:15:48 crc kubenswrapper[4833]: I0219 13:15:48.139349 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8dmq\" (UniqueName: \"kubernetes.io/projected/6533780f-2a0a-484f-afa5-ad561486e8a2-kube-api-access-d8dmq\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5hfnv\" (UID: \"6533780f-2a0a-484f-afa5-ad561486e8a2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hfnv" Feb 19 13:15:48 crc kubenswrapper[4833]: I0219 13:15:48.241964 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6533780f-2a0a-484f-afa5-ad561486e8a2-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5hfnv\" (UID: \"6533780f-2a0a-484f-afa5-ad561486e8a2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hfnv" Feb 19 13:15:48 crc kubenswrapper[4833]: I0219 13:15:48.242035 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6533780f-2a0a-484f-afa5-ad561486e8a2-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5hfnv\" (UID: \"6533780f-2a0a-484f-afa5-ad561486e8a2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hfnv" Feb 19 13:15:48 crc kubenswrapper[4833]: I0219 13:15:48.242279 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8dmq\" (UniqueName: 
\"kubernetes.io/projected/6533780f-2a0a-484f-afa5-ad561486e8a2-kube-api-access-d8dmq\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5hfnv\" (UID: \"6533780f-2a0a-484f-afa5-ad561486e8a2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hfnv" Feb 19 13:15:48 crc kubenswrapper[4833]: I0219 13:15:48.246641 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6533780f-2a0a-484f-afa5-ad561486e8a2-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5hfnv\" (UID: \"6533780f-2a0a-484f-afa5-ad561486e8a2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hfnv" Feb 19 13:15:48 crc kubenswrapper[4833]: I0219 13:15:48.250694 4833 scope.go:117] "RemoveContainer" containerID="d528054cc919c0f793501e938ac43720b97595b6f5a4ce47acdc323d668237ab" Feb 19 13:15:48 crc kubenswrapper[4833]: I0219 13:15:48.251764 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6533780f-2a0a-484f-afa5-ad561486e8a2-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5hfnv\" (UID: \"6533780f-2a0a-484f-afa5-ad561486e8a2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hfnv" Feb 19 13:15:48 crc kubenswrapper[4833]: I0219 13:15:48.259245 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8dmq\" (UniqueName: \"kubernetes.io/projected/6533780f-2a0a-484f-afa5-ad561486e8a2-kube-api-access-d8dmq\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5hfnv\" (UID: \"6533780f-2a0a-484f-afa5-ad561486e8a2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hfnv" Feb 19 13:15:48 crc kubenswrapper[4833]: I0219 13:15:48.293076 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hfnv" Feb 19 13:15:48 crc kubenswrapper[4833]: I0219 13:15:48.433407 4833 scope.go:117] "RemoveContainer" containerID="7f14887fc30017601effe9d5f3e501f5612845e000c13764e672a7f1166a72b0" Feb 19 13:15:48 crc kubenswrapper[4833]: I0219 13:15:48.499771 4833 scope.go:117] "RemoveContainer" containerID="84126134a7ea5b8ebd3145f958543ebbf0257aa351da7b992b8d4c9a1e9a16da" Feb 19 13:15:48 crc kubenswrapper[4833]: I0219 13:15:48.551273 4833 scope.go:117] "RemoveContainer" containerID="173d88ab4c363a41bd4f046280df21d5e7eefc8835360560ad09c58b0eb7ba52" Feb 19 13:15:48 crc kubenswrapper[4833]: I0219 13:15:48.577108 4833 scope.go:117] "RemoveContainer" containerID="213d0de8612f1bb7799b5fbbfd3f3934c38c221687a61f0faa4369f0ebf2b9b8" Feb 19 13:15:48 crc kubenswrapper[4833]: I0219 13:15:48.600709 4833 scope.go:117] "RemoveContainer" containerID="38b1634674e1c50f0e04e2286162bbeb49033885f7e7e96701a5067e935c944b" Feb 19 13:15:48 crc kubenswrapper[4833]: I0219 13:15:48.828162 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hfnv"] Feb 19 13:15:48 crc kubenswrapper[4833]: I0219 13:15:48.844587 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 13:15:48 crc kubenswrapper[4833]: I0219 13:15:48.894326 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hfnv" event={"ID":"6533780f-2a0a-484f-afa5-ad561486e8a2","Type":"ContainerStarted","Data":"99ace3744d1812e82d750304f12e6b630d94fd6b725e93e0c143e673fe3d0e8b"} Feb 19 13:15:49 crc kubenswrapper[4833]: I0219 13:15:49.315556 4833 scope.go:117] "RemoveContainer" containerID="68af062ad026f894823c5275509a3a85a3d7b9b44d6ca2d938db284880905483" Feb 19 13:15:49 crc kubenswrapper[4833]: E0219 13:15:49.315885 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:15:51 crc kubenswrapper[4833]: I0219 13:15:51.928116 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hfnv" event={"ID":"6533780f-2a0a-484f-afa5-ad561486e8a2","Type":"ContainerStarted","Data":"2c55a0c2d96046254d697f6764c7204d6cab9345e3ae53855d79e83651a99c77"} Feb 19 13:15:51 crc kubenswrapper[4833]: I0219 13:15:51.955700 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hfnv" podStartSLOduration=3.060217464 podStartE2EDuration="4.955675963s" podCreationTimestamp="2026-02-19 13:15:47 +0000 UTC" firstStartedPulling="2026-02-19 13:15:48.844340602 +0000 UTC m=+1759.239859370" lastFinishedPulling="2026-02-19 13:15:50.739799101 +0000 UTC m=+1761.135317869" observedRunningTime="2026-02-19 13:15:51.949285678 +0000 UTC m=+1762.344804456" watchObservedRunningTime="2026-02-19 13:15:51.955675963 +0000 UTC m=+1762.351194771" Feb 19 13:15:55 crc kubenswrapper[4833]: I0219 13:15:55.968191 4833 generic.go:334] "Generic (PLEG): container finished" podID="6533780f-2a0a-484f-afa5-ad561486e8a2" 
containerID="2c55a0c2d96046254d697f6764c7204d6cab9345e3ae53855d79e83651a99c77" exitCode=0 Feb 19 13:15:55 crc kubenswrapper[4833]: I0219 13:15:55.968282 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hfnv" event={"ID":"6533780f-2a0a-484f-afa5-ad561486e8a2","Type":"ContainerDied","Data":"2c55a0c2d96046254d697f6764c7204d6cab9345e3ae53855d79e83651a99c77"} Feb 19 13:15:57 crc kubenswrapper[4833]: I0219 13:15:57.406601 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hfnv" Feb 19 13:15:57 crc kubenswrapper[4833]: I0219 13:15:57.514638 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8dmq\" (UniqueName: \"kubernetes.io/projected/6533780f-2a0a-484f-afa5-ad561486e8a2-kube-api-access-d8dmq\") pod \"6533780f-2a0a-484f-afa5-ad561486e8a2\" (UID: \"6533780f-2a0a-484f-afa5-ad561486e8a2\") " Feb 19 13:15:57 crc kubenswrapper[4833]: I0219 13:15:57.514758 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6533780f-2a0a-484f-afa5-ad561486e8a2-ssh-key-openstack-edpm-ipam\") pod \"6533780f-2a0a-484f-afa5-ad561486e8a2\" (UID: \"6533780f-2a0a-484f-afa5-ad561486e8a2\") " Feb 19 13:15:57 crc kubenswrapper[4833]: I0219 13:15:57.514814 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6533780f-2a0a-484f-afa5-ad561486e8a2-inventory\") pod \"6533780f-2a0a-484f-afa5-ad561486e8a2\" (UID: \"6533780f-2a0a-484f-afa5-ad561486e8a2\") " Feb 19 13:15:57 crc kubenswrapper[4833]: I0219 13:15:57.530723 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6533780f-2a0a-484f-afa5-ad561486e8a2-kube-api-access-d8dmq" (OuterVolumeSpecName: "kube-api-access-d8dmq") pod "6533780f-2a0a-484f-afa5-ad561486e8a2" (UID: "6533780f-2a0a-484f-afa5-ad561486e8a2"). InnerVolumeSpecName "kube-api-access-d8dmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:15:57 crc kubenswrapper[4833]: I0219 13:15:57.544218 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6533780f-2a0a-484f-afa5-ad561486e8a2-inventory" (OuterVolumeSpecName: "inventory") pod "6533780f-2a0a-484f-afa5-ad561486e8a2" (UID: "6533780f-2a0a-484f-afa5-ad561486e8a2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:15:57 crc kubenswrapper[4833]: I0219 13:15:57.572653 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6533780f-2a0a-484f-afa5-ad561486e8a2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6533780f-2a0a-484f-afa5-ad561486e8a2" (UID: "6533780f-2a0a-484f-afa5-ad561486e8a2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:15:57 crc kubenswrapper[4833]: I0219 13:15:57.616771 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6533780f-2a0a-484f-afa5-ad561486e8a2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 13:15:57 crc kubenswrapper[4833]: I0219 13:15:57.616807 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6533780f-2a0a-484f-afa5-ad561486e8a2-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 13:15:57 crc kubenswrapper[4833]: I0219 13:15:57.616817 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8dmq\" (UniqueName: \"kubernetes.io/projected/6533780f-2a0a-484f-afa5-ad561486e8a2-kube-api-access-d8dmq\") on node \"crc\" DevicePath \"\"" Feb 19 13:15:57 crc kubenswrapper[4833]: I0219 13:15:57.988539 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hfnv" event={"ID":"6533780f-2a0a-484f-afa5-ad561486e8a2","Type":"ContainerDied","Data":"99ace3744d1812e82d750304f12e6b630d94fd6b725e93e0c143e673fe3d0e8b"} Feb 19 13:15:57 crc kubenswrapper[4833]: I0219 13:15:57.988589 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99ace3744d1812e82d750304f12e6b630d94fd6b725e93e0c143e673fe3d0e8b" Feb 19 13:15:57 crc kubenswrapper[4833]: I0219 13:15:57.988655 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5hfnv" Feb 19 13:15:58 crc kubenswrapper[4833]: I0219 13:15:58.065227 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-vj2ln"] Feb 19 13:15:58 crc kubenswrapper[4833]: E0219 13:15:58.065722 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6533780f-2a0a-484f-afa5-ad561486e8a2" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 19 13:15:58 crc kubenswrapper[4833]: I0219 13:15:58.065747 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="6533780f-2a0a-484f-afa5-ad561486e8a2" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 19 13:15:58 crc kubenswrapper[4833]: I0219 13:15:58.065969 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="6533780f-2a0a-484f-afa5-ad561486e8a2" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 19 13:15:58 crc kubenswrapper[4833]: I0219 13:15:58.066853 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vj2ln" Feb 19 13:15:58 crc kubenswrapper[4833]: I0219 13:15:58.073127 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 13:15:58 crc kubenswrapper[4833]: I0219 13:15:58.073418 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xcxf4" Feb 19 13:15:58 crc kubenswrapper[4833]: I0219 13:15:58.074671 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 13:15:58 crc kubenswrapper[4833]: I0219 13:15:58.076757 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 13:15:58 crc kubenswrapper[4833]: I0219 13:15:58.106972 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-vj2ln"] Feb 19 13:15:58 crc kubenswrapper[4833]: I0219 13:15:58.125409 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56fbaae9-eaee-4f1d-99b6-53bc919ecb4b-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vj2ln\" (UID: \"56fbaae9-eaee-4f1d-99b6-53bc919ecb4b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vj2ln" Feb 19 13:15:58 crc kubenswrapper[4833]: I0219 13:15:58.125460 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn4rr\" (UniqueName: \"kubernetes.io/projected/56fbaae9-eaee-4f1d-99b6-53bc919ecb4b-kube-api-access-tn4rr\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vj2ln\" (UID: \"56fbaae9-eaee-4f1d-99b6-53bc919ecb4b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vj2ln" Feb 19 13:15:58 crc kubenswrapper[4833]: I0219 13:15:58.125485 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56fbaae9-eaee-4f1d-99b6-53bc919ecb4b-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vj2ln\" (UID: \"56fbaae9-eaee-4f1d-99b6-53bc919ecb4b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vj2ln" Feb 19 13:15:58 crc kubenswrapper[4833]: I0219 13:15:58.227383 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56fbaae9-eaee-4f1d-99b6-53bc919ecb4b-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vj2ln\" (UID: \"56fbaae9-eaee-4f1d-99b6-53bc919ecb4b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vj2ln" Feb 19 13:15:58 crc kubenswrapper[4833]: I0219 13:15:58.227444 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn4rr\" (UniqueName: \"kubernetes.io/projected/56fbaae9-eaee-4f1d-99b6-53bc919ecb4b-kube-api-access-tn4rr\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vj2ln\" (UID: \"56fbaae9-eaee-4f1d-99b6-53bc919ecb4b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vj2ln" Feb 19 13:15:58 crc kubenswrapper[4833]: I0219 13:15:58.227470 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56fbaae9-eaee-4f1d-99b6-53bc919ecb4b-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-vj2ln\" (UID: \"56fbaae9-eaee-4f1d-99b6-53bc919ecb4b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vj2ln" Feb 19 13:15:58 crc kubenswrapper[4833]: I0219 13:15:58.231397 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56fbaae9-eaee-4f1d-99b6-53bc919ecb4b-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vj2ln\" (UID: \"56fbaae9-eaee-4f1d-99b6-53bc919ecb4b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vj2ln" Feb 19 13:15:58 crc kubenswrapper[4833]: I0219 13:15:58.231742 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56fbaae9-eaee-4f1d-99b6-53bc919ecb4b-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vj2ln\" (UID: \"56fbaae9-eaee-4f1d-99b6-53bc919ecb4b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vj2ln" Feb 19 13:15:58 crc kubenswrapper[4833]: I0219 13:15:58.245411 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn4rr\" (UniqueName: \"kubernetes.io/projected/56fbaae9-eaee-4f1d-99b6-53bc919ecb4b-kube-api-access-tn4rr\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-vj2ln\" (UID: \"56fbaae9-eaee-4f1d-99b6-53bc919ecb4b\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vj2ln" Feb 19 13:15:58 crc kubenswrapper[4833]: I0219 13:15:58.388309 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vj2ln" Feb 19 13:15:58 crc kubenswrapper[4833]: I0219 13:15:58.953217 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-vj2ln"] Feb 19 13:15:59 crc kubenswrapper[4833]: I0219 13:15:59.002008 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vj2ln" event={"ID":"56fbaae9-eaee-4f1d-99b6-53bc919ecb4b","Type":"ContainerStarted","Data":"f781036618d82f90f073b15cf56a6af114821e2b7e445b542e50930f3c661832"} Feb 19 13:16:00 crc kubenswrapper[4833]: I0219 13:16:00.016004 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vj2ln" event={"ID":"56fbaae9-eaee-4f1d-99b6-53bc919ecb4b","Type":"ContainerStarted","Data":"08c37836e32b5838a6117569fda4185fba79839da943f34394f108274f6c225f"} Feb 19 13:16:00 crc kubenswrapper[4833]: I0219 13:16:00.045291 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vj2ln" podStartSLOduration=1.613235388 podStartE2EDuration="2.045267875s" podCreationTimestamp="2026-02-19 13:15:58 +0000 UTC" firstStartedPulling="2026-02-19 13:15:58.959544142 +0000 UTC m=+1769.355062910" lastFinishedPulling="2026-02-19 13:15:59.391576609 +0000 UTC m=+1769.787095397" observedRunningTime="2026-02-19 13:16:00.030076891 +0000 UTC m=+1770.425595669" watchObservedRunningTime="2026-02-19 13:16:00.045267875 +0000 UTC m=+1770.440786653" Feb 19 13:16:00 crc kubenswrapper[4833]: I0219 13:16:00.322207 4833 scope.go:117] "RemoveContainer" containerID="68af062ad026f894823c5275509a3a85a3d7b9b44d6ca2d938db284880905483" Feb 19 13:16:00 crc kubenswrapper[4833]: E0219 13:16:00.322650 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:16:12 crc kubenswrapper[4833]: I0219 13:16:12.315301 4833 scope.go:117] "RemoveContainer" containerID="68af062ad026f894823c5275509a3a85a3d7b9b44d6ca2d938db284880905483" Feb 19 13:16:12 crc kubenswrapper[4833]: E0219 13:16:12.316134 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:16:24 crc kubenswrapper[4833]: I0219 13:16:24.316014 4833 scope.go:117] "RemoveContainer" containerID="68af062ad026f894823c5275509a3a85a3d7b9b44d6ca2d938db284880905483" Feb 19 13:16:24 crc kubenswrapper[4833]: E0219 13:16:24.316790 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:16:35 crc kubenswrapper[4833]: I0219 13:16:35.329434 4833 generic.go:334] "Generic (PLEG): container finished" podID="56fbaae9-eaee-4f1d-99b6-53bc919ecb4b" containerID="08c37836e32b5838a6117569fda4185fba79839da943f34394f108274f6c225f" exitCode=0 Feb 19 13:16:35 crc kubenswrapper[4833]: I0219 13:16:35.329543 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vj2ln" event={"ID":"56fbaae9-eaee-4f1d-99b6-53bc919ecb4b","Type":"ContainerDied","Data":"08c37836e32b5838a6117569fda4185fba79839da943f34394f108274f6c225f"} Feb 19 13:16:36 crc kubenswrapper[4833]: I0219 13:16:36.072314 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-2j7hx"] Feb 19 13:16:36 crc kubenswrapper[4833]: I0219 13:16:36.086751 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-slhnv"] Feb 19 13:16:36 crc kubenswrapper[4833]: I0219 13:16:36.097916 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-2j7hx"] Feb 19 13:16:36 crc kubenswrapper[4833]: I0219 13:16:36.107242 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-slhnv"] Feb 19 13:16:36 crc kubenswrapper[4833]: I0219 13:16:36.116650 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-d050-account-create-update-4w8nt"] Feb 19 13:16:36 crc kubenswrapper[4833]: I0219 13:16:36.124994 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-d050-account-create-update-4w8nt"] Feb 19 13:16:36 crc kubenswrapper[4833]: I0219 13:16:36.325890 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="265af042-f6c4-4d9c-90a9-ae5305c7951e" path="/var/lib/kubelet/pods/265af042-f6c4-4d9c-90a9-ae5305c7951e/volumes" Feb 19 13:16:36 crc 
kubenswrapper[4833]: I0219 13:16:36.326548 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdd856e6-ba47-4660-b0b1-7e6202b97bb5" path="/var/lib/kubelet/pods/bdd856e6-ba47-4660-b0b1-7e6202b97bb5/volumes" Feb 19 13:16:36 crc kubenswrapper[4833]: I0219 13:16:36.327189 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4d96078-4bfc-49d9-b70b-1be6d9b29558" path="/var/lib/kubelet/pods/f4d96078-4bfc-49d9-b70b-1be6d9b29558/volumes" Feb 19 13:16:36 crc kubenswrapper[4833]: I0219 13:16:36.735146 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vj2ln" Feb 19 13:16:36 crc kubenswrapper[4833]: I0219 13:16:36.894348 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tn4rr\" (UniqueName: \"kubernetes.io/projected/56fbaae9-eaee-4f1d-99b6-53bc919ecb4b-kube-api-access-tn4rr\") pod \"56fbaae9-eaee-4f1d-99b6-53bc919ecb4b\" (UID: \"56fbaae9-eaee-4f1d-99b6-53bc919ecb4b\") " Feb 19 13:16:36 crc kubenswrapper[4833]: I0219 13:16:36.894452 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56fbaae9-eaee-4f1d-99b6-53bc919ecb4b-ssh-key-openstack-edpm-ipam\") pod \"56fbaae9-eaee-4f1d-99b6-53bc919ecb4b\" (UID: \"56fbaae9-eaee-4f1d-99b6-53bc919ecb4b\") " Feb 19 13:16:36 crc kubenswrapper[4833]: I0219 13:16:36.894541 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56fbaae9-eaee-4f1d-99b6-53bc919ecb4b-inventory\") pod \"56fbaae9-eaee-4f1d-99b6-53bc919ecb4b\" (UID: \"56fbaae9-eaee-4f1d-99b6-53bc919ecb4b\") " Feb 19 13:16:36 crc kubenswrapper[4833]: I0219 13:16:36.899786 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56fbaae9-eaee-4f1d-99b6-53bc919ecb4b-kube-api-access-tn4rr" (OuterVolumeSpecName: "kube-api-access-tn4rr") pod "56fbaae9-eaee-4f1d-99b6-53bc919ecb4b" (UID: "56fbaae9-eaee-4f1d-99b6-53bc919ecb4b"). InnerVolumeSpecName "kube-api-access-tn4rr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:16:36 crc kubenswrapper[4833]: I0219 13:16:36.948776 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56fbaae9-eaee-4f1d-99b6-53bc919ecb4b-inventory" (OuterVolumeSpecName: "inventory") pod "56fbaae9-eaee-4f1d-99b6-53bc919ecb4b" (UID: "56fbaae9-eaee-4f1d-99b6-53bc919ecb4b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:16:36 crc kubenswrapper[4833]: I0219 13:16:36.950016 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56fbaae9-eaee-4f1d-99b6-53bc919ecb4b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "56fbaae9-eaee-4f1d-99b6-53bc919ecb4b" (UID: "56fbaae9-eaee-4f1d-99b6-53bc919ecb4b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:16:36 crc kubenswrapper[4833]: I0219 13:16:36.996742 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tn4rr\" (UniqueName: \"kubernetes.io/projected/56fbaae9-eaee-4f1d-99b6-53bc919ecb4b-kube-api-access-tn4rr\") on node \"crc\" DevicePath \"\"" Feb 19 13:16:36 crc kubenswrapper[4833]: I0219 13:16:36.996789 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/56fbaae9-eaee-4f1d-99b6-53bc919ecb4b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 13:16:36 crc kubenswrapper[4833]: I0219 13:16:36.996804 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56fbaae9-eaee-4f1d-99b6-53bc919ecb4b-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 13:16:37 crc kubenswrapper[4833]: I0219 13:16:37.046066 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-56bb-account-create-update-45wwp"] Feb 19 13:16:37 crc kubenswrapper[4833]: I0219 13:16:37.056636 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-7774-account-create-update-dcgfl"] Feb 19 13:16:37 crc kubenswrapper[4833]: I0219 13:16:37.064737 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-bqvdg"] Feb 19 13:16:37 crc kubenswrapper[4833]: I0219 13:16:37.073154 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-7774-account-create-update-dcgfl"] Feb 19 13:16:37 crc kubenswrapper[4833]: I0219 13:16:37.080670 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-56bb-account-create-update-45wwp"] Feb 19 13:16:37 crc kubenswrapper[4833]: I0219 13:16:37.087629 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-bqvdg"] Feb 19 13:16:37 crc kubenswrapper[4833]: I0219 13:16:37.345626 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vj2ln" event={"ID":"56fbaae9-eaee-4f1d-99b6-53bc919ecb4b","Type":"ContainerDied","Data":"f781036618d82f90f073b15cf56a6af114821e2b7e445b542e50930f3c661832"} Feb 19 13:16:37 crc kubenswrapper[4833]: I0219 13:16:37.345671 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-vj2ln" Feb 19 13:16:37 crc kubenswrapper[4833]: I0219 13:16:37.345670 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f781036618d82f90f073b15cf56a6af114821e2b7e445b542e50930f3c661832" Feb 19 13:16:37 crc kubenswrapper[4833]: I0219 13:16:37.434982 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6sb4"] Feb 19 13:16:37 crc kubenswrapper[4833]: E0219 13:16:37.435479 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56fbaae9-eaee-4f1d-99b6-53bc919ecb4b" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 19 13:16:37 crc kubenswrapper[4833]: I0219 13:16:37.435558 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="56fbaae9-eaee-4f1d-99b6-53bc919ecb4b" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 19 13:16:37 crc kubenswrapper[4833]: I0219 13:16:37.435738 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="56fbaae9-eaee-4f1d-99b6-53bc919ecb4b" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 19 13:16:37 crc kubenswrapper[4833]: I0219 13:16:37.436382 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6sb4" Feb 19 13:16:37 crc kubenswrapper[4833]: I0219 13:16:37.438395 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 13:16:37 crc kubenswrapper[4833]: I0219 13:16:37.438450 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 13:16:37 crc kubenswrapper[4833]: I0219 13:16:37.438839 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 13:16:37 crc kubenswrapper[4833]: I0219 13:16:37.440218 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xcxf4" Feb 19 13:16:37 crc kubenswrapper[4833]: I0219 13:16:37.442406 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6sb4"] Feb 19 13:16:37 crc kubenswrapper[4833]: I0219 13:16:37.607030 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64jhv\" (UniqueName: \"kubernetes.io/projected/46a1f7d2-8e31-4ef8-8508-08be63d0fee2-kube-api-access-64jhv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s6sb4\" (UID: \"46a1f7d2-8e31-4ef8-8508-08be63d0fee2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6sb4" Feb 19 13:16:37 crc kubenswrapper[4833]: I0219 13:16:37.607197 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46a1f7d2-8e31-4ef8-8508-08be63d0fee2-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s6sb4\" (UID: \"46a1f7d2-8e31-4ef8-8508-08be63d0fee2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6sb4" Feb 19 13:16:37 crc kubenswrapper[4833]: I0219 13:16:37.607561 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46a1f7d2-8e31-4ef8-8508-08be63d0fee2-ssh-key-openstack-edpm-ipam\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-s6sb4\" (UID: \"46a1f7d2-8e31-4ef8-8508-08be63d0fee2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6sb4" Feb 19 13:16:37 crc kubenswrapper[4833]: I0219 13:16:37.708604 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46a1f7d2-8e31-4ef8-8508-08be63d0fee2-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s6sb4\" (UID: \"46a1f7d2-8e31-4ef8-8508-08be63d0fee2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6sb4" Feb 19 13:16:37 crc kubenswrapper[4833]: I0219 13:16:37.708672 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64jhv\" (UniqueName: \"kubernetes.io/projected/46a1f7d2-8e31-4ef8-8508-08be63d0fee2-kube-api-access-64jhv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s6sb4\" (UID: \"46a1f7d2-8e31-4ef8-8508-08be63d0fee2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6sb4" Feb 19 13:16:37 crc kubenswrapper[4833]: I0219 13:16:37.708706 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46a1f7d2-8e31-4ef8-8508-08be63d0fee2-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s6sb4\" (UID: \"46a1f7d2-8e31-4ef8-8508-08be63d0fee2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6sb4" Feb 19 13:16:37 crc kubenswrapper[4833]: I0219 13:16:37.715100 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46a1f7d2-8e31-4ef8-8508-08be63d0fee2-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s6sb4\" (UID: \"46a1f7d2-8e31-4ef8-8508-08be63d0fee2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6sb4" Feb 19 13:16:37 crc kubenswrapper[4833]: I0219 13:16:37.717799 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46a1f7d2-8e31-4ef8-8508-08be63d0fee2-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s6sb4\" (UID: \"46a1f7d2-8e31-4ef8-8508-08be63d0fee2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6sb4" Feb 19 13:16:37 crc kubenswrapper[4833]: I0219 13:16:37.726599 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64jhv\" (UniqueName: \"kubernetes.io/projected/46a1f7d2-8e31-4ef8-8508-08be63d0fee2-kube-api-access-64jhv\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-s6sb4\" (UID: \"46a1f7d2-8e31-4ef8-8508-08be63d0fee2\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6sb4" Feb 19 13:16:37 crc kubenswrapper[4833]: I0219 13:16:37.752108 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6sb4" Feb 19 13:16:38 crc kubenswrapper[4833]: I0219 13:16:38.286262 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6sb4"] Feb 19 13:16:38 crc kubenswrapper[4833]: I0219 13:16:38.315294 4833 scope.go:117] "RemoveContainer" containerID="68af062ad026f894823c5275509a3a85a3d7b9b44d6ca2d938db284880905483" Feb 19 13:16:38 crc kubenswrapper[4833]: E0219 13:16:38.315645 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:16:38 crc kubenswrapper[4833]: I0219 13:16:38.326338 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dbf3254-d4fb-44d8-9f9e-637d9351d696" path="/var/lib/kubelet/pods/6dbf3254-d4fb-44d8-9f9e-637d9351d696/volumes" Feb 19 13:16:38 crc kubenswrapper[4833]: I0219 13:16:38.327974 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="872972e7-f010-4500-923f-9d29cf00bb60" path="/var/lib/kubelet/pods/872972e7-f010-4500-923f-9d29cf00bb60/volumes" Feb 19 13:16:38 crc kubenswrapper[4833]: I0219 13:16:38.328662 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3bce0ce-1de6-41b9-b947-d6deb44c40a7" path="/var/lib/kubelet/pods/f3bce0ce-1de6-41b9-b947-d6deb44c40a7/volumes" Feb 19 13:16:38 crc kubenswrapper[4833]: I0219 13:16:38.354648 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6sb4" event={"ID":"46a1f7d2-8e31-4ef8-8508-08be63d0fee2","Type":"ContainerStarted","Data":"2de2c205edd625f12477e2930791559855c1b4b959bcb07a77b4cb79548996c8"} Feb 19 13:16:39 crc kubenswrapper[4833]: I0219 13:16:39.369227 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6sb4" event={"ID":"46a1f7d2-8e31-4ef8-8508-08be63d0fee2","Type":"ContainerStarted","Data":"c9c28211b76e9454139d2b0b8d45cff22b4ada728d6525a2f25e28658f7d5de6"} Feb 19 13:16:39 crc kubenswrapper[4833]: I0219 13:16:39.385809 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6sb4" podStartSLOduration=1.897044693 podStartE2EDuration="2.385784428s" podCreationTimestamp="2026-02-19 13:16:37 +0000 UTC" firstStartedPulling="2026-02-19 13:16:38.287857399 +0000 UTC m=+1808.683376157" lastFinishedPulling="2026-02-19 13:16:38.776597114 +0000 UTC m=+1809.172115892" observedRunningTime="2026-02-19 13:16:39.385031358 +0000 UTC m=+1809.780550166" watchObservedRunningTime="2026-02-19 13:16:39.385784428 +0000 UTC m=+1809.781303236" Feb 19 13:16:48 crc kubenswrapper[4833]: I0219 13:16:48.844853 4833 scope.go:117] "RemoveContainer" containerID="69f25c28e08dfd5dda237d356bd4c2e14220158d36acc0d06b0ad42a7909c43a" Feb 19 13:16:48 crc kubenswrapper[4833]: I0219 13:16:48.876095 4833 scope.go:117] "RemoveContainer" containerID="2ef9069ddf43a8ca727bcd59b0fcb6abf87ee4beb694fe4a4aefa6188f172871" Feb 19 13:16:48 crc kubenswrapper[4833]: I0219 13:16:48.950366 4833 scope.go:117] "RemoveContainer" 
containerID="4a9c1d875d5757ee368a993a2ad717f9541c2f1fd000dab4586d7f4ae124c82e" Feb 19 13:16:49 crc kubenswrapper[4833]: I0219 13:16:49.000165 4833 scope.go:117] "RemoveContainer" containerID="58856ade3678574f8e42f2010360981657983c98bdc8707d02e40263e1a9acf6" Feb 19 13:16:49 crc kubenswrapper[4833]: I0219 13:16:49.042953 4833 scope.go:117] "RemoveContainer" containerID="b3387933afa0029cb71e5bfcbac3c59474e858313143df2f87dd6c8c275ef6e8" Feb 19 13:16:49 crc kubenswrapper[4833]: I0219 13:16:49.081116 4833 scope.go:117] "RemoveContainer" containerID="307630457cf72a3fd20dd4332c9dbd2eb622a546daacc4dd70f9716d4e4eb781" Feb 19 13:16:50 crc kubenswrapper[4833]: I0219 13:16:50.325187 4833 scope.go:117] "RemoveContainer" containerID="68af062ad026f894823c5275509a3a85a3d7b9b44d6ca2d938db284880905483" Feb 19 13:16:50 crc kubenswrapper[4833]: E0219 13:16:50.325799 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:16:58 crc kubenswrapper[4833]: I0219 13:16:58.038990 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kws6n"] Feb 19 13:16:58 crc kubenswrapper[4833]: I0219 13:16:58.046631 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-kws6n"] Feb 19 13:16:58 crc kubenswrapper[4833]: I0219 13:16:58.327358 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4708f947-ca39-46bf-b1f2-35d28d6dc573" path="/var/lib/kubelet/pods/4708f947-ca39-46bf-b1f2-35d28d6dc573/volumes" Feb 19 13:17:01 crc kubenswrapper[4833]: I0219 13:17:01.316145 4833 scope.go:117] "RemoveContainer" containerID="68af062ad026f894823c5275509a3a85a3d7b9b44d6ca2d938db284880905483" Feb 19 13:17:01 crc kubenswrapper[4833]: E0219 13:17:01.316948 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:17:12 crc kubenswrapper[4833]: I0219 13:17:12.315885 4833 scope.go:117] "RemoveContainer" containerID="68af062ad026f894823c5275509a3a85a3d7b9b44d6ca2d938db284880905483" Feb 19 13:17:12 crc kubenswrapper[4833]: E0219 13:17:12.316952 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:17:20 crc kubenswrapper[4833]: I0219 13:17:20.065042 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-xf2cl"] Feb 19 13:17:20 crc kubenswrapper[4833]: I0219 13:17:20.082604 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-xf2cl"] 
Feb 19 13:17:20 crc kubenswrapper[4833]: I0219 13:17:20.326246 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e20c111-d37e-4ab6-8096-92fa51a4c4f8" path="/var/lib/kubelet/pods/3e20c111-d37e-4ab6-8096-92fa51a4c4f8/volumes" Feb 19 13:17:21 crc kubenswrapper[4833]: I0219 13:17:21.035426 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xgh42"] Feb 19 13:17:21 crc kubenswrapper[4833]: I0219 13:17:21.043509 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-xgh42"] Feb 19 13:17:22 crc kubenswrapper[4833]: I0219 13:17:22.334837 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9bfbaad-5149-4e04-baac-a9d0c0c569d0" path="/var/lib/kubelet/pods/d9bfbaad-5149-4e04-baac-a9d0c0c569d0/volumes" Feb 19 13:17:24 crc kubenswrapper[4833]: I0219 13:17:24.315264 4833 scope.go:117] "RemoveContainer" containerID="68af062ad026f894823c5275509a3a85a3d7b9b44d6ca2d938db284880905483" Feb 19 13:17:24 crc kubenswrapper[4833]: E0219 13:17:24.315649 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:17:26 crc kubenswrapper[4833]: I0219 13:17:26.039064 4833 generic.go:334] "Generic (PLEG): container finished" podID="46a1f7d2-8e31-4ef8-8508-08be63d0fee2" containerID="c9c28211b76e9454139d2b0b8d45cff22b4ada728d6525a2f25e28658f7d5de6" exitCode=0 Feb 19 13:17:26 crc kubenswrapper[4833]: I0219 13:17:26.039135 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6sb4" event={"ID":"46a1f7d2-8e31-4ef8-8508-08be63d0fee2","Type":"ContainerDied","Data":"c9c28211b76e9454139d2b0b8d45cff22b4ada728d6525a2f25e28658f7d5de6"} Feb 19 13:17:27 crc kubenswrapper[4833]: I0219 13:17:27.442095 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6sb4" Feb 19 13:17:27 crc kubenswrapper[4833]: I0219 13:17:27.532778 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46a1f7d2-8e31-4ef8-8508-08be63d0fee2-ssh-key-openstack-edpm-ipam\") pod \"46a1f7d2-8e31-4ef8-8508-08be63d0fee2\" (UID: \"46a1f7d2-8e31-4ef8-8508-08be63d0fee2\") " Feb 19 13:17:27 crc kubenswrapper[4833]: I0219 13:17:27.533031 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46a1f7d2-8e31-4ef8-8508-08be63d0fee2-inventory\") pod \"46a1f7d2-8e31-4ef8-8508-08be63d0fee2\" (UID: \"46a1f7d2-8e31-4ef8-8508-08be63d0fee2\") " Feb 19 13:17:27 crc kubenswrapper[4833]: I0219 13:17:27.533070 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64jhv\" (UniqueName: \"kubernetes.io/projected/46a1f7d2-8e31-4ef8-8508-08be63d0fee2-kube-api-access-64jhv\") pod \"46a1f7d2-8e31-4ef8-8508-08be63d0fee2\" (UID: \"46a1f7d2-8e31-4ef8-8508-08be63d0fee2\") " Feb 19 13:17:27 crc kubenswrapper[4833]: I0219 13:17:27.549875 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46a1f7d2-8e31-4ef8-8508-08be63d0fee2-kube-api-access-64jhv" (OuterVolumeSpecName: "kube-api-access-64jhv") pod "46a1f7d2-8e31-4ef8-8508-08be63d0fee2" (UID: "46a1f7d2-8e31-4ef8-8508-08be63d0fee2"). InnerVolumeSpecName "kube-api-access-64jhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:17:27 crc kubenswrapper[4833]: I0219 13:17:27.559829 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46a1f7d2-8e31-4ef8-8508-08be63d0fee2-inventory" (OuterVolumeSpecName: "inventory") pod "46a1f7d2-8e31-4ef8-8508-08be63d0fee2" (UID: "46a1f7d2-8e31-4ef8-8508-08be63d0fee2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:17:27 crc kubenswrapper[4833]: I0219 13:17:27.561909 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46a1f7d2-8e31-4ef8-8508-08be63d0fee2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "46a1f7d2-8e31-4ef8-8508-08be63d0fee2" (UID: "46a1f7d2-8e31-4ef8-8508-08be63d0fee2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:17:27 crc kubenswrapper[4833]: I0219 13:17:27.634960 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46a1f7d2-8e31-4ef8-8508-08be63d0fee2-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 13:17:27 crc kubenswrapper[4833]: I0219 13:17:27.634993 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64jhv\" (UniqueName: \"kubernetes.io/projected/46a1f7d2-8e31-4ef8-8508-08be63d0fee2-kube-api-access-64jhv\") on node \"crc\" DevicePath \"\"" Feb 19 13:17:27 crc kubenswrapper[4833]: I0219 13:17:27.635004 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/46a1f7d2-8e31-4ef8-8508-08be63d0fee2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 13:17:28 crc kubenswrapper[4833]: I0219 13:17:28.059331 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6sb4" event={"ID":"46a1f7d2-8e31-4ef8-8508-08be63d0fee2","Type":"ContainerDied","Data":"2de2c205edd625f12477e2930791559855c1b4b959bcb07a77b4cb79548996c8"} Feb 19 13:17:28 crc kubenswrapper[4833]: I0219 13:17:28.059364 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2de2c205edd625f12477e2930791559855c1b4b959bcb07a77b4cb79548996c8" Feb 19 13:17:28 crc kubenswrapper[4833]: I0219 13:17:28.059395 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-s6sb4" Feb 19 13:17:28 crc kubenswrapper[4833]: I0219 13:17:28.144910 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-fndfb"] Feb 19 13:17:28 crc kubenswrapper[4833]: E0219 13:17:28.145330 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46a1f7d2-8e31-4ef8-8508-08be63d0fee2" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 19 13:17:28 crc kubenswrapper[4833]: I0219 13:17:28.145351 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="46a1f7d2-8e31-4ef8-8508-08be63d0fee2" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 19 13:17:28 crc kubenswrapper[4833]: I0219 13:17:28.145613 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="46a1f7d2-8e31-4ef8-8508-08be63d0fee2" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 19 13:17:28 crc kubenswrapper[4833]: I0219 13:17:28.146374 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-fndfb" Feb 19 13:17:28 crc kubenswrapper[4833]: I0219 13:17:28.151914 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 13:17:28 crc kubenswrapper[4833]: I0219 13:17:28.152003 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 13:17:28 crc kubenswrapper[4833]: I0219 13:17:28.152036 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 13:17:28 crc kubenswrapper[4833]: I0219 13:17:28.152183 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xcxf4" Feb 19 13:17:28 crc kubenswrapper[4833]: I0219 13:17:28.154171 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-fndfb"] Feb 19 13:17:28 crc kubenswrapper[4833]: I0219 13:17:28.245372 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2c49\" (UniqueName: \"kubernetes.io/projected/f25a8685-c4bf-460a-a553-e26d1ddc9d09-kube-api-access-w2c49\") pod \"ssh-known-hosts-edpm-deployment-fndfb\" (UID: \"f25a8685-c4bf-460a-a553-e26d1ddc9d09\") " pod="openstack/ssh-known-hosts-edpm-deployment-fndfb" Feb 19 13:17:28 crc kubenswrapper[4833]: I0219 13:17:28.245454 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f25a8685-c4bf-460a-a553-e26d1ddc9d09-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-fndfb\" (UID: \"f25a8685-c4bf-460a-a553-e26d1ddc9d09\") " pod="openstack/ssh-known-hosts-edpm-deployment-fndfb" Feb 19 13:17:28 crc kubenswrapper[4833]: I0219 13:17:28.245487 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f25a8685-c4bf-460a-a553-e26d1ddc9d09-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-fndfb\" (UID: \"f25a8685-c4bf-460a-a553-e26d1ddc9d09\") " pod="openstack/ssh-known-hosts-edpm-deployment-fndfb" Feb 19 13:17:28 crc kubenswrapper[4833]: I0219 13:17:28.347012 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2c49\" (UniqueName: \"kubernetes.io/projected/f25a8685-c4bf-460a-a553-e26d1ddc9d09-kube-api-access-w2c49\") pod \"ssh-known-hosts-edpm-deployment-fndfb\" (UID: \"f25a8685-c4bf-460a-a553-e26d1ddc9d09\") " pod="openstack/ssh-known-hosts-edpm-deployment-fndfb" Feb 19 13:17:28 crc kubenswrapper[4833]: I0219 13:17:28.347104 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f25a8685-c4bf-460a-a553-e26d1ddc9d09-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-fndfb\" (UID: \"f25a8685-c4bf-460a-a553-e26d1ddc9d09\") " pod="openstack/ssh-known-hosts-edpm-deployment-fndfb" Feb 19 13:17:28 crc kubenswrapper[4833]: I0219 13:17:28.347135 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f25a8685-c4bf-460a-a553-e26d1ddc9d09-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-fndfb\" (UID: \"f25a8685-c4bf-460a-a553-e26d1ddc9d09\") " pod="openstack/ssh-known-hosts-edpm-deployment-fndfb" Feb 19 13:17:28 crc 
kubenswrapper[4833]: I0219 13:17:28.351401 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f25a8685-c4bf-460a-a553-e26d1ddc9d09-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-fndfb\" (UID: \"f25a8685-c4bf-460a-a553-e26d1ddc9d09\") " pod="openstack/ssh-known-hosts-edpm-deployment-fndfb" Feb 19 13:17:28 crc kubenswrapper[4833]: I0219 13:17:28.353165 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f25a8685-c4bf-460a-a553-e26d1ddc9d09-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-fndfb\" (UID: \"f25a8685-c4bf-460a-a553-e26d1ddc9d09\") " pod="openstack/ssh-known-hosts-edpm-deployment-fndfb" Feb 19 13:17:28 crc kubenswrapper[4833]: I0219 13:17:28.363599 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2c49\" (UniqueName: \"kubernetes.io/projected/f25a8685-c4bf-460a-a553-e26d1ddc9d09-kube-api-access-w2c49\") pod \"ssh-known-hosts-edpm-deployment-fndfb\" (UID: \"f25a8685-c4bf-460a-a553-e26d1ddc9d09\") " pod="openstack/ssh-known-hosts-edpm-deployment-fndfb" Feb 19 13:17:28 crc kubenswrapper[4833]: I0219 13:17:28.464677 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-fndfb" Feb 19 13:17:28 crc kubenswrapper[4833]: I0219 13:17:28.863141 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-fndfb"] Feb 19 13:17:29 crc kubenswrapper[4833]: I0219 13:17:29.068473 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-fndfb" event={"ID":"f25a8685-c4bf-460a-a553-e26d1ddc9d09","Type":"ContainerStarted","Data":"d82f60f726652a37f9a45e2d963c31ee6814927f157f373a51aec6c047f33732"} Feb 19 13:17:30 crc kubenswrapper[4833]: I0219 13:17:30.080359 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-fndfb" event={"ID":"f25a8685-c4bf-460a-a553-e26d1ddc9d09","Type":"ContainerStarted","Data":"01e26aa28b253d39e5da2a60d806c0537bc7485559bd9374bdd7195f71f631c9"} Feb 19 13:17:30 crc kubenswrapper[4833]: I0219 13:17:30.102831 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-fndfb" podStartSLOduration=1.670559653 podStartE2EDuration="2.102813673s" podCreationTimestamp="2026-02-19 13:17:28 +0000 UTC" firstStartedPulling="2026-02-19 13:17:28.870097196 +0000 UTC m=+1859.265615964" lastFinishedPulling="2026-02-19 13:17:29.302351196 +0000 UTC m=+1859.697869984" observedRunningTime="2026-02-19 13:17:30.097872324 +0000 UTC m=+1860.493391102" watchObservedRunningTime="2026-02-19 13:17:30.102813673 +0000 UTC m=+1860.498332441" Feb 19 13:17:35 crc kubenswrapper[4833]: I0219 13:17:35.315841 4833 scope.go:117] "RemoveContainer" containerID="68af062ad026f894823c5275509a3a85a3d7b9b44d6ca2d938db284880905483" Feb 19 13:17:35 crc kubenswrapper[4833]: E0219 13:17:35.316556 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:17:37 crc 
kubenswrapper[4833]: I0219 13:17:37.133544 4833 generic.go:334] "Generic (PLEG): container finished" podID="f25a8685-c4bf-460a-a553-e26d1ddc9d09" containerID="01e26aa28b253d39e5da2a60d806c0537bc7485559bd9374bdd7195f71f631c9" exitCode=0 Feb 19 13:17:37 crc kubenswrapper[4833]: I0219 13:17:37.133608 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-fndfb" event={"ID":"f25a8685-c4bf-460a-a553-e26d1ddc9d09","Type":"ContainerDied","Data":"01e26aa28b253d39e5da2a60d806c0537bc7485559bd9374bdd7195f71f631c9"} Feb 19 13:17:38 crc kubenswrapper[4833]: I0219 13:17:38.579672 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-fndfb" Feb 19 13:17:38 crc kubenswrapper[4833]: I0219 13:17:38.754159 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f25a8685-c4bf-460a-a553-e26d1ddc9d09-ssh-key-openstack-edpm-ipam\") pod \"f25a8685-c4bf-460a-a553-e26d1ddc9d09\" (UID: \"f25a8685-c4bf-460a-a553-e26d1ddc9d09\") " Feb 19 13:17:38 crc kubenswrapper[4833]: I0219 13:17:38.754296 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2c49\" (UniqueName: \"kubernetes.io/projected/f25a8685-c4bf-460a-a553-e26d1ddc9d09-kube-api-access-w2c49\") pod \"f25a8685-c4bf-460a-a553-e26d1ddc9d09\" (UID: \"f25a8685-c4bf-460a-a553-e26d1ddc9d09\") " Feb 19 13:17:38 crc kubenswrapper[4833]: I0219 13:17:38.754416 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f25a8685-c4bf-460a-a553-e26d1ddc9d09-inventory-0\") pod \"f25a8685-c4bf-460a-a553-e26d1ddc9d09\" (UID: \"f25a8685-c4bf-460a-a553-e26d1ddc9d09\") " Feb 19 13:17:38 crc kubenswrapper[4833]: I0219 13:17:38.760228 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f25a8685-c4bf-460a-a553-e26d1ddc9d09-kube-api-access-w2c49" (OuterVolumeSpecName: "kube-api-access-w2c49") pod "f25a8685-c4bf-460a-a553-e26d1ddc9d09" (UID: "f25a8685-c4bf-460a-a553-e26d1ddc9d09"). InnerVolumeSpecName "kube-api-access-w2c49". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:17:38 crc kubenswrapper[4833]: I0219 13:17:38.791373 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f25a8685-c4bf-460a-a553-e26d1ddc9d09-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "f25a8685-c4bf-460a-a553-e26d1ddc9d09" (UID: "f25a8685-c4bf-460a-a553-e26d1ddc9d09"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:17:38 crc kubenswrapper[4833]: I0219 13:17:38.810257 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f25a8685-c4bf-460a-a553-e26d1ddc9d09-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f25a8685-c4bf-460a-a553-e26d1ddc9d09" (UID: "f25a8685-c4bf-460a-a553-e26d1ddc9d09"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:17:38 crc kubenswrapper[4833]: I0219 13:17:38.856600 4833 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/f25a8685-c4bf-460a-a553-e26d1ddc9d09-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 19 13:17:38 crc kubenswrapper[4833]: I0219 13:17:38.856647 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f25a8685-c4bf-460a-a553-e26d1ddc9d09-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 13:17:38 crc kubenswrapper[4833]: I0219 13:17:38.856663 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2c49\" (UniqueName: \"kubernetes.io/projected/f25a8685-c4bf-460a-a553-e26d1ddc9d09-kube-api-access-w2c49\") on node \"crc\" DevicePath \"\"" Feb 19 13:17:39 crc kubenswrapper[4833]: I0219 13:17:39.155121 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-fndfb" event={"ID":"f25a8685-c4bf-460a-a553-e26d1ddc9d09","Type":"ContainerDied","Data":"d82f60f726652a37f9a45e2d963c31ee6814927f157f373a51aec6c047f33732"} Feb 19 13:17:39 crc kubenswrapper[4833]: I0219 13:17:39.155654 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d82f60f726652a37f9a45e2d963c31ee6814927f157f373a51aec6c047f33732" Feb 19 13:17:39 crc kubenswrapper[4833]: I0219 13:17:39.155168 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-fndfb" Feb 19 13:17:39 crc kubenswrapper[4833]: I0219 13:17:39.238748 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8s8fs"] Feb 19 13:17:39 crc kubenswrapper[4833]: E0219 13:17:39.239208 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f25a8685-c4bf-460a-a553-e26d1ddc9d09" containerName="ssh-known-hosts-edpm-deployment" Feb 19 13:17:39 crc kubenswrapper[4833]: I0219 13:17:39.239232 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f25a8685-c4bf-460a-a553-e26d1ddc9d09" containerName="ssh-known-hosts-edpm-deployment" Feb 19 13:17:39 crc kubenswrapper[4833]: I0219 13:17:39.239459 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f25a8685-c4bf-460a-a553-e26d1ddc9d09" containerName="ssh-known-hosts-edpm-deployment" Feb 19 13:17:39 crc kubenswrapper[4833]: I0219 13:17:39.240245 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8s8fs" Feb 19 13:17:39 crc kubenswrapper[4833]: I0219 13:17:39.242185 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 13:17:39 crc kubenswrapper[4833]: I0219 13:17:39.242446 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 13:17:39 crc kubenswrapper[4833]: I0219 13:17:39.242941 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 13:17:39 crc kubenswrapper[4833]: I0219 13:17:39.243125 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xcxf4" Feb 19 13:17:39 crc kubenswrapper[4833]: I0219 13:17:39.247295 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8s8fs"] Feb 19 13:17:39 crc kubenswrapper[4833]: I0219 13:17:39.366232 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnrk4\" (UniqueName: \"kubernetes.io/projected/290637db-709b-4ce8-a200-76e9bf643d55-kube-api-access-rnrk4\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8s8fs\" (UID: \"290637db-709b-4ce8-a200-76e9bf643d55\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8s8fs" Feb 19 13:17:39 crc kubenswrapper[4833]: I0219 13:17:39.366302 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/290637db-709b-4ce8-a200-76e9bf643d55-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8s8fs\" (UID: \"290637db-709b-4ce8-a200-76e9bf643d55\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8s8fs" Feb 19 13:17:39 crc kubenswrapper[4833]: I0219 13:17:39.366445 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/290637db-709b-4ce8-a200-76e9bf643d55-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8s8fs\" (UID: \"290637db-709b-4ce8-a200-76e9bf643d55\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8s8fs" Feb 19 13:17:39 crc kubenswrapper[4833]: I0219 13:17:39.472982 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnrk4\" (UniqueName: \"kubernetes.io/projected/290637db-709b-4ce8-a200-76e9bf643d55-kube-api-access-rnrk4\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8s8fs\" (UID: \"290637db-709b-4ce8-a200-76e9bf643d55\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8s8fs" Feb 19 13:17:39 crc kubenswrapper[4833]: I0219 13:17:39.473069 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/290637db-709b-4ce8-a200-76e9bf643d55-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8s8fs\" (UID: \"290637db-709b-4ce8-a200-76e9bf643d55\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8s8fs" Feb 19 13:17:39 crc kubenswrapper[4833]: I0219 13:17:39.473207 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/290637db-709b-4ce8-a200-76e9bf643d55-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-8s8fs\" (UID: \"290637db-709b-4ce8-a200-76e9bf643d55\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8s8fs" Feb 19 13:17:39 crc kubenswrapper[4833]: I0219 13:17:39.477225 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/290637db-709b-4ce8-a200-76e9bf643d55-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8s8fs\" (UID: \"290637db-709b-4ce8-a200-76e9bf643d55\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8s8fs" Feb 19 13:17:39 crc kubenswrapper[4833]: I0219 13:17:39.477752 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/290637db-709b-4ce8-a200-76e9bf643d55-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8s8fs\" (UID: \"290637db-709b-4ce8-a200-76e9bf643d55\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8s8fs" Feb 19 13:17:39 crc kubenswrapper[4833]: I0219 13:17:39.490546 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnrk4\" (UniqueName: \"kubernetes.io/projected/290637db-709b-4ce8-a200-76e9bf643d55-kube-api-access-rnrk4\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8s8fs\" (UID: \"290637db-709b-4ce8-a200-76e9bf643d55\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8s8fs" Feb 19 13:17:39 crc kubenswrapper[4833]: I0219 13:17:39.561005 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8s8fs" Feb 19 13:17:40 crc kubenswrapper[4833]: I0219 13:17:40.101704 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8s8fs"] Feb 19 13:17:40 crc kubenswrapper[4833]: I0219 13:17:40.165449 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8s8fs" event={"ID":"290637db-709b-4ce8-a200-76e9bf643d55","Type":"ContainerStarted","Data":"109bc32d2d4a328c1dcbe25878f159961f68aafbc27de2f468f83c96408355d4"} Feb 19 13:17:41 crc kubenswrapper[4833]: I0219 13:17:41.175715 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8s8fs" event={"ID":"290637db-709b-4ce8-a200-76e9bf643d55","Type":"ContainerStarted","Data":"f2224fd4765aa16cf2e54a40df5ee04cbe64a5e5371fcdaaaf5dbf11f2f28580"} Feb 19 13:17:41 crc kubenswrapper[4833]: I0219 13:17:41.203928 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8s8fs" podStartSLOduration=1.765138197 podStartE2EDuration="2.203910237s" podCreationTimestamp="2026-02-19 13:17:39 +0000 UTC" firstStartedPulling="2026-02-19 13:17:40.12126847 +0000 UTC m=+1870.516787238" lastFinishedPulling="2026-02-19 13:17:40.5600405 +0000 UTC m=+1870.955559278" observedRunningTime="2026-02-19 13:17:41.198077695 +0000 UTC m=+1871.593596473" watchObservedRunningTime="2026-02-19 13:17:41.203910237 +0000 UTC m=+1871.599429005" Feb 19 13:17:47 crc kubenswrapper[4833]: I0219 13:17:47.315325 4833 scope.go:117] "RemoveContainer" containerID="68af062ad026f894823c5275509a3a85a3d7b9b44d6ca2d938db284880905483" Feb 19 13:17:47 crc kubenswrapper[4833]: E0219 13:17:47.316322 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:17:49 crc kubenswrapper[4833]: I0219 13:17:49.218194 4833 scope.go:117] "RemoveContainer" containerID="0c4e8e971796a06a16ae30f06e68b72ceaf71bc2d6404a5f5b1ddc3366307f00" Feb 19 13:17:49 crc kubenswrapper[4833]: I0219 13:17:49.254822 4833 generic.go:334] "Generic (PLEG): container finished" podID="290637db-709b-4ce8-a200-76e9bf643d55" containerID="f2224fd4765aa16cf2e54a40df5ee04cbe64a5e5371fcdaaaf5dbf11f2f28580" exitCode=0 Feb 19 13:17:49 crc kubenswrapper[4833]: I0219 13:17:49.254863 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8s8fs" event={"ID":"290637db-709b-4ce8-a200-76e9bf643d55","Type":"ContainerDied","Data":"f2224fd4765aa16cf2e54a40df5ee04cbe64a5e5371fcdaaaf5dbf11f2f28580"} Feb 19 13:17:49 crc kubenswrapper[4833]: I0219 13:17:49.265830 4833 scope.go:117] "RemoveContainer" containerID="7570dfe81763a724dabe497998a8d726e3cc8ebf05bd8a8446f6cebee08c4bf3" Feb 19 13:17:49 crc kubenswrapper[4833]: I0219 13:17:49.341480 4833 scope.go:117] "RemoveContainer" containerID="3be9148bd920a373f0eac2d2a3fd31c80ddfd2bf7deea6c915740a03c089743c" Feb 19 13:17:50 crc kubenswrapper[4833]: I0219 13:17:50.678559 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8s8fs" Feb 19 13:17:50 crc kubenswrapper[4833]: I0219 13:17:50.823815 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/290637db-709b-4ce8-a200-76e9bf643d55-inventory\") pod \"290637db-709b-4ce8-a200-76e9bf643d55\" (UID: \"290637db-709b-4ce8-a200-76e9bf643d55\") " Feb 19 13:17:50 crc kubenswrapper[4833]: I0219 13:17:50.823967 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/290637db-709b-4ce8-a200-76e9bf643d55-ssh-key-openstack-edpm-ipam\") pod \"290637db-709b-4ce8-a200-76e9bf643d55\" (UID: \"290637db-709b-4ce8-a200-76e9bf643d55\") " Feb 19 13:17:50 crc kubenswrapper[4833]: I0219 13:17:50.824237 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnrk4\" (UniqueName: \"kubernetes.io/projected/290637db-709b-4ce8-a200-76e9bf643d55-kube-api-access-rnrk4\") pod \"290637db-709b-4ce8-a200-76e9bf643d55\" (UID: \"290637db-709b-4ce8-a200-76e9bf643d55\") " Feb 19 13:17:50 crc kubenswrapper[4833]: I0219 13:17:50.829027 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/290637db-709b-4ce8-a200-76e9bf643d55-kube-api-access-rnrk4" (OuterVolumeSpecName: "kube-api-access-rnrk4") pod "290637db-709b-4ce8-a200-76e9bf643d55" (UID: "290637db-709b-4ce8-a200-76e9bf643d55"). InnerVolumeSpecName "kube-api-access-rnrk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:17:50 crc kubenswrapper[4833]: I0219 13:17:50.852429 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/290637db-709b-4ce8-a200-76e9bf643d55-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "290637db-709b-4ce8-a200-76e9bf643d55" (UID: "290637db-709b-4ce8-a200-76e9bf643d55"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:17:50 crc kubenswrapper[4833]: I0219 13:17:50.855323 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/290637db-709b-4ce8-a200-76e9bf643d55-inventory" (OuterVolumeSpecName: "inventory") pod "290637db-709b-4ce8-a200-76e9bf643d55" (UID: "290637db-709b-4ce8-a200-76e9bf643d55"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:17:50 crc kubenswrapper[4833]: I0219 13:17:50.926358 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/290637db-709b-4ce8-a200-76e9bf643d55-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 13:17:50 crc kubenswrapper[4833]: I0219 13:17:50.926388 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnrk4\" (UniqueName: \"kubernetes.io/projected/290637db-709b-4ce8-a200-76e9bf643d55-kube-api-access-rnrk4\") on node \"crc\" DevicePath \"\"" Feb 19 13:17:50 crc kubenswrapper[4833]: I0219 13:17:50.926400 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/290637db-709b-4ce8-a200-76e9bf643d55-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 13:17:51 crc kubenswrapper[4833]: I0219 13:17:51.277162 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8s8fs" event={"ID":"290637db-709b-4ce8-a200-76e9bf643d55","Type":"ContainerDied","Data":"109bc32d2d4a328c1dcbe25878f159961f68aafbc27de2f468f83c96408355d4"} Feb 19 13:17:51 crc kubenswrapper[4833]: I0219 13:17:51.277214 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="109bc32d2d4a328c1dcbe25878f159961f68aafbc27de2f468f83c96408355d4" Feb 19 13:17:51 crc kubenswrapper[4833]: I0219 13:17:51.277219 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8s8fs" Feb 19 13:17:51 crc kubenswrapper[4833]: I0219 13:17:51.358483 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cmwbg"] Feb 19 13:17:51 crc kubenswrapper[4833]: E0219 13:17:51.359006 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="290637db-709b-4ce8-a200-76e9bf643d55" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 19 13:17:51 crc kubenswrapper[4833]: I0219 13:17:51.359031 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="290637db-709b-4ce8-a200-76e9bf643d55" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 19 13:17:51 crc kubenswrapper[4833]: I0219 13:17:51.359246 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="290637db-709b-4ce8-a200-76e9bf643d55" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 19 13:17:51 crc kubenswrapper[4833]: I0219 13:17:51.359987 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cmwbg" Feb 19 13:17:51 crc kubenswrapper[4833]: I0219 13:17:51.362869 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xcxf4" Feb 19 13:17:51 crc kubenswrapper[4833]: I0219 13:17:51.363613 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 13:17:51 crc kubenswrapper[4833]: I0219 13:17:51.364869 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 13:17:51 crc kubenswrapper[4833]: I0219 13:17:51.365109 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 13:17:51 crc kubenswrapper[4833]: I0219 13:17:51.402726 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cmwbg"] Feb 19 13:17:51 crc kubenswrapper[4833]: I0219 13:17:51.434709 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-442vd\" (UniqueName: \"kubernetes.io/projected/80c2b43f-f289-49a9-a544-08316b461536-kube-api-access-442vd\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-cmwbg\" (UID: \"80c2b43f-f289-49a9-a544-08316b461536\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cmwbg" Feb 19 13:17:51 crc kubenswrapper[4833]: I0219 13:17:51.434880 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/80c2b43f-f289-49a9-a544-08316b461536-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-cmwbg\" (UID: \"80c2b43f-f289-49a9-a544-08316b461536\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cmwbg" Feb 19 13:17:51 crc kubenswrapper[4833]: I0219 13:17:51.435017 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80c2b43f-f289-49a9-a544-08316b461536-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-cmwbg\" (UID: \"80c2b43f-f289-49a9-a544-08316b461536\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cmwbg" Feb 19 13:17:51 crc kubenswrapper[4833]: I0219 13:17:51.537008 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-442vd\" (UniqueName: \"kubernetes.io/projected/80c2b43f-f289-49a9-a544-08316b461536-kube-api-access-442vd\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-cmwbg\" (UID: \"80c2b43f-f289-49a9-a544-08316b461536\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cmwbg" Feb 19 13:17:51 crc kubenswrapper[4833]: I0219 13:17:51.537831 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/80c2b43f-f289-49a9-a544-08316b461536-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-cmwbg\" (UID: \"80c2b43f-f289-49a9-a544-08316b461536\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cmwbg" Feb 19 13:17:51 crc kubenswrapper[4833]: I0219 13:17:51.538028 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80c2b43f-f289-49a9-a544-08316b461536-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-cmwbg\" (UID: \"80c2b43f-f289-49a9-a544-08316b461536\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cmwbg" Feb 19 13:17:51 crc kubenswrapper[4833]: I0219 13:17:51.546208 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80c2b43f-f289-49a9-a544-08316b461536-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-cmwbg\" (UID: \"80c2b43f-f289-49a9-a544-08316b461536\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cmwbg" Feb 19 13:17:51 crc kubenswrapper[4833]: I0219 13:17:51.546248 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/80c2b43f-f289-49a9-a544-08316b461536-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-cmwbg\" (UID: \"80c2b43f-f289-49a9-a544-08316b461536\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cmwbg" Feb 19 13:17:51 crc kubenswrapper[4833]: I0219 13:17:51.555660 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-442vd\" (UniqueName: \"kubernetes.io/projected/80c2b43f-f289-49a9-a544-08316b461536-kube-api-access-442vd\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-cmwbg\" (UID: \"80c2b43f-f289-49a9-a544-08316b461536\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cmwbg" Feb 19 13:17:51 crc kubenswrapper[4833]: I0219 13:17:51.690941 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cmwbg" Feb 19 13:17:52 crc kubenswrapper[4833]: I0219 13:17:52.237349 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cmwbg"] Feb 19 13:17:52 crc kubenswrapper[4833]: W0219 13:17:52.243154 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80c2b43f_f289_49a9_a544_08316b461536.slice/crio-d238efe9234ce3e8a89cf86ab11c18d016362d182c99b880e791d993584822bf WatchSource:0}: Error finding container d238efe9234ce3e8a89cf86ab11c18d016362d182c99b880e791d993584822bf: Status 404 returned error can't find the container with id d238efe9234ce3e8a89cf86ab11c18d016362d182c99b880e791d993584822bf Feb 19 13:17:52 crc kubenswrapper[4833]: I0219 13:17:52.293244 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cmwbg" event={"ID":"80c2b43f-f289-49a9-a544-08316b461536","Type":"ContainerStarted","Data":"d238efe9234ce3e8a89cf86ab11c18d016362d182c99b880e791d993584822bf"} Feb 19 13:17:53 crc kubenswrapper[4833]: I0219 13:17:53.303244 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cmwbg" event={"ID":"80c2b43f-f289-49a9-a544-08316b461536","Type":"ContainerStarted","Data":"652bc41b08970379effe960096316d5fa2aeacc1c11321dcd5ec094bf7822b84"} Feb 19 13:17:53 crc kubenswrapper[4833]: I0219 13:17:53.328521 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cmwbg" podStartSLOduration=1.6359124189999998 podStartE2EDuration="2.328476187s" podCreationTimestamp="2026-02-19 13:17:51 +0000 UTC" firstStartedPulling="2026-02-19 13:17:52.245263418 +0000 UTC m=+1882.640782186" lastFinishedPulling="2026-02-19 13:17:52.937827146 
+0000 UTC m=+1883.333345954" observedRunningTime="2026-02-19 13:17:53.317399065 +0000 UTC m=+1883.712917843" watchObservedRunningTime="2026-02-19 13:17:53.328476187 +0000 UTC m=+1883.723994965" Feb 19 13:17:58 crc kubenswrapper[4833]: I0219 13:17:58.314478 4833 scope.go:117] "RemoveContainer" containerID="68af062ad026f894823c5275509a3a85a3d7b9b44d6ca2d938db284880905483" Feb 19 13:17:58 crc kubenswrapper[4833]: E0219 13:17:58.315232 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:18:03 crc kubenswrapper[4833]: I0219 13:18:03.425210 4833 generic.go:334] "Generic (PLEG): container finished" podID="80c2b43f-f289-49a9-a544-08316b461536" containerID="652bc41b08970379effe960096316d5fa2aeacc1c11321dcd5ec094bf7822b84" exitCode=0 Feb 19 13:18:03 crc kubenswrapper[4833]: I0219 13:18:03.425292 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cmwbg" event={"ID":"80c2b43f-f289-49a9-a544-08316b461536","Type":"ContainerDied","Data":"652bc41b08970379effe960096316d5fa2aeacc1c11321dcd5ec094bf7822b84"} Feb 19 13:18:04 crc kubenswrapper[4833]: I0219 13:18:04.891946 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cmwbg" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.009725 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-442vd\" (UniqueName: \"kubernetes.io/projected/80c2b43f-f289-49a9-a544-08316b461536-kube-api-access-442vd\") pod \"80c2b43f-f289-49a9-a544-08316b461536\" (UID: \"80c2b43f-f289-49a9-a544-08316b461536\") " Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.009872 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/80c2b43f-f289-49a9-a544-08316b461536-ssh-key-openstack-edpm-ipam\") pod \"80c2b43f-f289-49a9-a544-08316b461536\" (UID: \"80c2b43f-f289-49a9-a544-08316b461536\") " Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.009919 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80c2b43f-f289-49a9-a544-08316b461536-inventory\") pod \"80c2b43f-f289-49a9-a544-08316b461536\" (UID: \"80c2b43f-f289-49a9-a544-08316b461536\") " Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.016567 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80c2b43f-f289-49a9-a544-08316b461536-kube-api-access-442vd" (OuterVolumeSpecName: "kube-api-access-442vd") pod "80c2b43f-f289-49a9-a544-08316b461536" (UID: "80c2b43f-f289-49a9-a544-08316b461536"). InnerVolumeSpecName "kube-api-access-442vd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.051781 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80c2b43f-f289-49a9-a544-08316b461536-inventory" (OuterVolumeSpecName: "inventory") pod "80c2b43f-f289-49a9-a544-08316b461536" (UID: "80c2b43f-f289-49a9-a544-08316b461536"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.053026 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-b7bqt"] Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.058066 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80c2b43f-f289-49a9-a544-08316b461536-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "80c2b43f-f289-49a9-a544-08316b461536" (UID: "80c2b43f-f289-49a9-a544-08316b461536"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.060305 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-b7bqt"] Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.112894 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-442vd\" (UniqueName: \"kubernetes.io/projected/80c2b43f-f289-49a9-a544-08316b461536-kube-api-access-442vd\") on node \"crc\" DevicePath \"\"" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.112944 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/80c2b43f-f289-49a9-a544-08316b461536-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.112959 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/80c2b43f-f289-49a9-a544-08316b461536-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.445192 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cmwbg" event={"ID":"80c2b43f-f289-49a9-a544-08316b461536","Type":"ContainerDied","Data":"d238efe9234ce3e8a89cf86ab11c18d016362d182c99b880e791d993584822bf"} Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.445553 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d238efe9234ce3e8a89cf86ab11c18d016362d182c99b880e791d993584822bf" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.445279 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-cmwbg" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.523284 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g"] Feb 19 13:18:05 crc kubenswrapper[4833]: E0219 13:18:05.523752 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80c2b43f-f289-49a9-a544-08316b461536" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.523777 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="80c2b43f-f289-49a9-a544-08316b461536" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.524007 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="80c2b43f-f289-49a9-a544-08316b461536" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.524811 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.527444 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.527652 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.527815 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.527953 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xcxf4" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.528469 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.529378 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.529954 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.530015 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.532035 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g"] Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.623405 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.623453 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.623476 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.623578 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.623619 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.623661 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.623697 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.623728 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.623749 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.623770 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt2mj\" (UniqueName: \"kubernetes.io/projected/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-kube-api-access-vt2mj\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.623796 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-repo-setup-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.623823 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.623838 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.623900 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.725560 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.725883 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.726101 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.726744 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.726919 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.727326 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.727521 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.727678 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.727794 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.727950 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt2mj\" (UniqueName: \"kubernetes.io/projected/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-kube-api-access-vt2mj\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.728110 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.728240 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" 
(UniqueName: \"kubernetes.io/projected/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.728348 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.728458 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.730476 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.730731 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.732551 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.733683 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.734285 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.734409 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.735024 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.735213 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.734589 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.737419 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.738241 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.738400 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.744800 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.746903 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt2mj\" (UniqueName: \"kubernetes.io/projected/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-kube-api-access-vt2mj\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:05 crc kubenswrapper[4833]: I0219 13:18:05.847408 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:06 crc kubenswrapper[4833]: I0219 13:18:06.334134 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b18380f-3674-4e04-a34d-bf81ba3c58c8" path="/var/lib/kubelet/pods/5b18380f-3674-4e04-a34d-bf81ba3c58c8/volumes" Feb 19 13:18:06 crc kubenswrapper[4833]: I0219 13:18:06.396699 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g"] Feb 19 13:18:06 crc kubenswrapper[4833]: I0219 13:18:06.455585 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" event={"ID":"1abf51ed-df14-4ea8-a9df-e6ee9810e40e","Type":"ContainerStarted","Data":"504f7334bc788fa1431fabf01c5d92f9f1970bb5a71da5bb9b47e0a89ced7346"} Feb 19 13:18:07 crc kubenswrapper[4833]: I0219 13:18:07.466387 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" event={"ID":"1abf51ed-df14-4ea8-a9df-e6ee9810e40e","Type":"ContainerStarted","Data":"a222b53cc4003c3f215f816f99f705b0b99476a0c8500704459899d872fd52c8"} Feb 19 13:18:07 crc kubenswrapper[4833]: I0219 13:18:07.494248 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" podStartSLOduration=1.891769472 podStartE2EDuration="2.494226819s" podCreationTimestamp="2026-02-19 13:18:05 +0000 UTC" firstStartedPulling="2026-02-19 13:18:06.403889842 +0000 UTC m=+1896.799408610" lastFinishedPulling="2026-02-19 13:18:07.006347169 +0000 UTC m=+1897.401865957" observedRunningTime="2026-02-19 13:18:07.488707254 +0000 UTC m=+1897.884226022" watchObservedRunningTime="2026-02-19 13:18:07.494226819 +0000 UTC m=+1897.889745597" Feb 19 13:18:12 crc kubenswrapper[4833]: I0219 13:18:12.315991 4833 scope.go:117] "RemoveContainer" containerID="68af062ad026f894823c5275509a3a85a3d7b9b44d6ca2d938db284880905483" Feb 19 13:18:12 crc kubenswrapper[4833]: E0219 13:18:12.317144 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:18:25 crc kubenswrapper[4833]: I0219 13:18:25.315652 4833 scope.go:117] "RemoveContainer" 
containerID="68af062ad026f894823c5275509a3a85a3d7b9b44d6ca2d938db284880905483" Feb 19 13:18:25 crc kubenswrapper[4833]: E0219 13:18:25.316887 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:18:38 crc kubenswrapper[4833]: I0219 13:18:38.315576 4833 scope.go:117] "RemoveContainer" containerID="68af062ad026f894823c5275509a3a85a3d7b9b44d6ca2d938db284880905483" Feb 19 13:18:38 crc kubenswrapper[4833]: E0219 13:18:38.316681 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:18:43 crc kubenswrapper[4833]: I0219 13:18:43.858172 4833 generic.go:334] "Generic (PLEG): container finished" podID="1abf51ed-df14-4ea8-a9df-e6ee9810e40e" containerID="a222b53cc4003c3f215f816f99f705b0b99476a0c8500704459899d872fd52c8" exitCode=0 Feb 19 13:18:43 crc kubenswrapper[4833]: I0219 13:18:43.858266 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" event={"ID":"1abf51ed-df14-4ea8-a9df-e6ee9810e40e","Type":"ContainerDied","Data":"a222b53cc4003c3f215f816f99f705b0b99476a0c8500704459899d872fd52c8"} Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.328300 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.457106 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.457206 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.457274 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-bootstrap-combined-ca-bundle\") pod \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.457374 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.457537 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-repo-setup-combined-ca-bundle\") pod \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.457630 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-ovn-combined-ca-bundle\") pod \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.457717 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt2mj\" (UniqueName: \"kubernetes.io/projected/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-kube-api-access-vt2mj\") pod \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.457812 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-nova-combined-ca-bundle\") pod \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.457942 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-telemetry-combined-ca-bundle\") pod \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\" (UID: 
\"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.458035 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-ssh-key-openstack-edpm-ipam\") pod \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.458091 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-inventory\") pod \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.458201 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-libvirt-combined-ca-bundle\") pod \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.458286 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.458363 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-neutron-metadata-combined-ca-bundle\") pod \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\" (UID: \"1abf51ed-df14-4ea8-a9df-e6ee9810e40e\") " Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.464947 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "1abf51ed-df14-4ea8-a9df-e6ee9810e40e" (UID: "1abf51ed-df14-4ea8-a9df-e6ee9810e40e"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.466137 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "1abf51ed-df14-4ea8-a9df-e6ee9810e40e" (UID: "1abf51ed-df14-4ea8-a9df-e6ee9810e40e"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.466158 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "1abf51ed-df14-4ea8-a9df-e6ee9810e40e" (UID: "1abf51ed-df14-4ea8-a9df-e6ee9810e40e"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.467107 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "1abf51ed-df14-4ea8-a9df-e6ee9810e40e" (UID: "1abf51ed-df14-4ea8-a9df-e6ee9810e40e"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.561297 4833 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.561321 4833 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.561332 4833 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.561342 4833 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.753070 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "1abf51ed-df14-4ea8-a9df-e6ee9810e40e" (UID: "1abf51ed-df14-4ea8-a9df-e6ee9810e40e"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.753217 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "1abf51ed-df14-4ea8-a9df-e6ee9810e40e" (UID: "1abf51ed-df14-4ea8-a9df-e6ee9810e40e"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.754180 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "1abf51ed-df14-4ea8-a9df-e6ee9810e40e" (UID: "1abf51ed-df14-4ea8-a9df-e6ee9810e40e"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.756715 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "1abf51ed-df14-4ea8-a9df-e6ee9810e40e" (UID: "1abf51ed-df14-4ea8-a9df-e6ee9810e40e"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.757200 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-kube-api-access-vt2mj" (OuterVolumeSpecName: "kube-api-access-vt2mj") pod "1abf51ed-df14-4ea8-a9df-e6ee9810e40e" (UID: "1abf51ed-df14-4ea8-a9df-e6ee9810e40e"). InnerVolumeSpecName "kube-api-access-vt2mj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.757632 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "1abf51ed-df14-4ea8-a9df-e6ee9810e40e" (UID: "1abf51ed-df14-4ea8-a9df-e6ee9810e40e"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.757807 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "1abf51ed-df14-4ea8-a9df-e6ee9810e40e" (UID: "1abf51ed-df14-4ea8-a9df-e6ee9810e40e"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.758231 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "1abf51ed-df14-4ea8-a9df-e6ee9810e40e" (UID: "1abf51ed-df14-4ea8-a9df-e6ee9810e40e"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.766012 4833 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.766051 4833 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.766065 4833 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.766076 4833 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.766088 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt2mj\" (UniqueName: \"kubernetes.io/projected/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-kube-api-access-vt2mj\") on node \"crc\" DevicePath \"\"" Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.766098 4833 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.766108 4833 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.766119 4833 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.779879 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-inventory" (OuterVolumeSpecName: "inventory") pod "1abf51ed-df14-4ea8-a9df-e6ee9810e40e" (UID: "1abf51ed-df14-4ea8-a9df-e6ee9810e40e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.787405 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1abf51ed-df14-4ea8-a9df-e6ee9810e40e" (UID: "1abf51ed-df14-4ea8-a9df-e6ee9810e40e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.867988 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.868038 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1abf51ed-df14-4ea8-a9df-e6ee9810e40e-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.881660 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" event={"ID":"1abf51ed-df14-4ea8-a9df-e6ee9810e40e","Type":"ContainerDied","Data":"504f7334bc788fa1431fabf01c5d92f9f1970bb5a71da5bb9b47e0a89ced7346"} Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.881995 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="504f7334bc788fa1431fabf01c5d92f9f1970bb5a71da5bb9b47e0a89ced7346" Feb 19 13:18:45 crc kubenswrapper[4833]: I0219 13:18:45.881744 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g" Feb 19 13:18:46 crc kubenswrapper[4833]: I0219 13:18:46.002778 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-6hdvq"] Feb 19 13:18:46 crc kubenswrapper[4833]: E0219 13:18:46.003576 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1abf51ed-df14-4ea8-a9df-e6ee9810e40e" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 19 13:18:46 crc kubenswrapper[4833]: I0219 13:18:46.003597 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="1abf51ed-df14-4ea8-a9df-e6ee9810e40e" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 19 13:18:46 crc kubenswrapper[4833]: I0219 13:18:46.004335 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="1abf51ed-df14-4ea8-a9df-e6ee9810e40e" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 19 13:18:46 crc kubenswrapper[4833]: I0219 13:18:46.005446 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6hdvq" Feb 19 13:18:46 crc kubenswrapper[4833]: I0219 13:18:46.009956 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 19 13:18:46 crc kubenswrapper[4833]: I0219 13:18:46.010047 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 13:18:46 crc kubenswrapper[4833]: I0219 13:18:46.010075 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xcxf4" Feb 19 13:18:46 crc kubenswrapper[4833]: I0219 13:18:46.010168 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 13:18:46 crc kubenswrapper[4833]: I0219 13:18:46.010748 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 13:18:46 crc kubenswrapper[4833]: I0219 13:18:46.027328 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-6hdvq"] Feb 19 13:18:46 crc kubenswrapper[4833]: I0219 13:18:46.072532 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5z9j\" (UniqueName: \"kubernetes.io/projected/3952291f-b3f9-4309-ae64-d6cbef7d6607-kube-api-access-g5z9j\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6hdvq\" (UID: \"3952291f-b3f9-4309-ae64-d6cbef7d6607\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6hdvq" Feb 19 13:18:46 crc kubenswrapper[4833]: I0219 13:18:46.072597 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3952291f-b3f9-4309-ae64-d6cbef7d6607-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6hdvq\" (UID: \"3952291f-b3f9-4309-ae64-d6cbef7d6607\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6hdvq" Feb 19 13:18:46 crc kubenswrapper[4833]: I0219 13:18:46.072656 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3952291f-b3f9-4309-ae64-d6cbef7d6607-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6hdvq\" (UID: \"3952291f-b3f9-4309-ae64-d6cbef7d6607\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6hdvq" Feb 19 13:18:46 crc kubenswrapper[4833]: I0219 13:18:46.072940 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3952291f-b3f9-4309-ae64-d6cbef7d6607-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6hdvq\" (UID: \"3952291f-b3f9-4309-ae64-d6cbef7d6607\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6hdvq" Feb 19 13:18:46 crc kubenswrapper[4833]: I0219 13:18:46.073099 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3952291f-b3f9-4309-ae64-d6cbef7d6607-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6hdvq\" (UID: \"3952291f-b3f9-4309-ae64-d6cbef7d6607\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6hdvq" Feb 19 13:18:46 crc kubenswrapper[4833]: I0219 13:18:46.174647 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3952291f-b3f9-4309-ae64-d6cbef7d6607-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6hdvq\" (UID: \"3952291f-b3f9-4309-ae64-d6cbef7d6607\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6hdvq" Feb 19 13:18:46 crc kubenswrapper[4833]: I0219 13:18:46.174739 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3952291f-b3f9-4309-ae64-d6cbef7d6607-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6hdvq\" (UID: \"3952291f-b3f9-4309-ae64-d6cbef7d6607\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6hdvq" Feb 19 13:18:46 crc kubenswrapper[4833]: I0219 13:18:46.174802 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5z9j\" (UniqueName: \"kubernetes.io/projected/3952291f-b3f9-4309-ae64-d6cbef7d6607-kube-api-access-g5z9j\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6hdvq\" (UID: \"3952291f-b3f9-4309-ae64-d6cbef7d6607\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6hdvq" Feb 19 13:18:46 crc kubenswrapper[4833]: I0219 13:18:46.174827 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3952291f-b3f9-4309-ae64-d6cbef7d6607-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6hdvq\" (UID: \"3952291f-b3f9-4309-ae64-d6cbef7d6607\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6hdvq" Feb 19 13:18:46 crc kubenswrapper[4833]: I0219 13:18:46.174875 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3952291f-b3f9-4309-ae64-d6cbef7d6607-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6hdvq\" (UID: \"3952291f-b3f9-4309-ae64-d6cbef7d6607\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6hdvq" Feb 19 13:18:46 crc kubenswrapper[4833]: I0219 13:18:46.175935 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3952291f-b3f9-4309-ae64-d6cbef7d6607-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6hdvq\" (UID: \"3952291f-b3f9-4309-ae64-d6cbef7d6607\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6hdvq" Feb 19 13:18:46 crc kubenswrapper[4833]: I0219 13:18:46.179098 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3952291f-b3f9-4309-ae64-d6cbef7d6607-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6hdvq\" (UID: \"3952291f-b3f9-4309-ae64-d6cbef7d6607\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6hdvq" Feb 19 13:18:46 crc kubenswrapper[4833]: I0219 13:18:46.179456 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3952291f-b3f9-4309-ae64-d6cbef7d6607-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6hdvq\" (UID: \"3952291f-b3f9-4309-ae64-d6cbef7d6607\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6hdvq" Feb 19 13:18:46 crc kubenswrapper[4833]: I0219 13:18:46.179549 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/3952291f-b3f9-4309-ae64-d6cbef7d6607-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6hdvq\" (UID: \"3952291f-b3f9-4309-ae64-d6cbef7d6607\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6hdvq" Feb 19 13:18:46 crc kubenswrapper[4833]: I0219 13:18:46.191311 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5z9j\" (UniqueName: \"kubernetes.io/projected/3952291f-b3f9-4309-ae64-d6cbef7d6607-kube-api-access-g5z9j\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-6hdvq\" (UID: \"3952291f-b3f9-4309-ae64-d6cbef7d6607\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6hdvq" Feb 19 13:18:46 crc kubenswrapper[4833]: I0219 13:18:46.334806 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6hdvq" Feb 19 13:18:46 crc kubenswrapper[4833]: I0219 13:18:46.885882 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-6hdvq"] Feb 19 13:18:47 crc kubenswrapper[4833]: I0219 13:18:47.904741 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6hdvq" event={"ID":"3952291f-b3f9-4309-ae64-d6cbef7d6607","Type":"ContainerStarted","Data":"8663a84a1c0eb6c6c98c7607395f9b74ab821abeb26488e986d467f9980bd80c"} Feb 19 13:18:47 crc kubenswrapper[4833]: I0219 13:18:47.905040 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6hdvq" event={"ID":"3952291f-b3f9-4309-ae64-d6cbef7d6607","Type":"ContainerStarted","Data":"fdae2bcd697ae226a7218b491757b718cac2356e83e46175719657ca897d24cb"} Feb 19 13:18:47 crc kubenswrapper[4833]: I0219 13:18:47.937143 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6hdvq" podStartSLOduration=2.469990109 podStartE2EDuration="2.937119532s" podCreationTimestamp="2026-02-19 13:18:45 +0000 UTC" firstStartedPulling="2026-02-19 13:18:46.899703679 +0000 UTC m=+1937.295222447" lastFinishedPulling="2026-02-19 13:18:47.366833092 +0000 UTC m=+1937.762351870" observedRunningTime="2026-02-19 13:18:47.924206372 +0000 UTC m=+1938.319725140" watchObservedRunningTime="2026-02-19 13:18:47.937119532 +0000 UTC m=+1938.332638300" Feb 19 13:18:49 crc kubenswrapper[4833]: I0219 13:18:49.470392 4833 scope.go:117] "RemoveContainer" containerID="0f9cbd4f0184c941dcf22b49fa070819df69445d7a107c5005dfce814fd7212f" Feb 19 13:18:51 crc kubenswrapper[4833]: I0219 13:18:51.315132 4833 scope.go:117] "RemoveContainer" containerID="68af062ad026f894823c5275509a3a85a3d7b9b44d6ca2d938db284880905483" Feb 19 13:18:51 crc kubenswrapper[4833]: I0219 13:18:51.945400 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" event={"ID":"a396d626-cea2-42cf-84c5-943b0b85a92b","Type":"ContainerStarted","Data":"9fc9b0a249b27484ad54a5d5192986aa294d70a00b6a6cd42401166586473b95"} Feb 19 13:19:47 crc kubenswrapper[4833]: I0219 13:19:47.467560 4833 generic.go:334] "Generic (PLEG): container finished" podID="3952291f-b3f9-4309-ae64-d6cbef7d6607" containerID="8663a84a1c0eb6c6c98c7607395f9b74ab821abeb26488e986d467f9980bd80c" exitCode=0 Feb 19 13:19:47 crc kubenswrapper[4833]: I0219 13:19:47.467712 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6hdvq" 
event={"ID":"3952291f-b3f9-4309-ae64-d6cbef7d6607","Type":"ContainerDied","Data":"8663a84a1c0eb6c6c98c7607395f9b74ab821abeb26488e986d467f9980bd80c"} Feb 19 13:19:48 crc kubenswrapper[4833]: I0219 13:19:48.922270 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6hdvq" Feb 19 13:19:48 crc kubenswrapper[4833]: I0219 13:19:48.988163 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3952291f-b3f9-4309-ae64-d6cbef7d6607-inventory\") pod \"3952291f-b3f9-4309-ae64-d6cbef7d6607\" (UID: \"3952291f-b3f9-4309-ae64-d6cbef7d6607\") " Feb 19 13:19:48 crc kubenswrapper[4833]: I0219 13:19:48.988236 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3952291f-b3f9-4309-ae64-d6cbef7d6607-ovncontroller-config-0\") pod \"3952291f-b3f9-4309-ae64-d6cbef7d6607\" (UID: \"3952291f-b3f9-4309-ae64-d6cbef7d6607\") " Feb 19 13:19:48 crc kubenswrapper[4833]: I0219 13:19:48.988292 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5z9j\" (UniqueName: \"kubernetes.io/projected/3952291f-b3f9-4309-ae64-d6cbef7d6607-kube-api-access-g5z9j\") pod \"3952291f-b3f9-4309-ae64-d6cbef7d6607\" (UID: \"3952291f-b3f9-4309-ae64-d6cbef7d6607\") " Feb 19 13:19:48 crc kubenswrapper[4833]: I0219 13:19:48.988331 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3952291f-b3f9-4309-ae64-d6cbef7d6607-ssh-key-openstack-edpm-ipam\") pod \"3952291f-b3f9-4309-ae64-d6cbef7d6607\" (UID: \"3952291f-b3f9-4309-ae64-d6cbef7d6607\") " Feb 19 13:19:48 crc kubenswrapper[4833]: I0219 13:19:48.988430 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3952291f-b3f9-4309-ae64-d6cbef7d6607-ovn-combined-ca-bundle\") pod \"3952291f-b3f9-4309-ae64-d6cbef7d6607\" (UID: \"3952291f-b3f9-4309-ae64-d6cbef7d6607\") " Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.002659 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3952291f-b3f9-4309-ae64-d6cbef7d6607-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "3952291f-b3f9-4309-ae64-d6cbef7d6607" (UID: "3952291f-b3f9-4309-ae64-d6cbef7d6607"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.002751 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3952291f-b3f9-4309-ae64-d6cbef7d6607-kube-api-access-g5z9j" (OuterVolumeSpecName: "kube-api-access-g5z9j") pod "3952291f-b3f9-4309-ae64-d6cbef7d6607" (UID: "3952291f-b3f9-4309-ae64-d6cbef7d6607"). InnerVolumeSpecName "kube-api-access-g5z9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.015200 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3952291f-b3f9-4309-ae64-d6cbef7d6607-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3952291f-b3f9-4309-ae64-d6cbef7d6607" (UID: "3952291f-b3f9-4309-ae64-d6cbef7d6607"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.016358 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3952291f-b3f9-4309-ae64-d6cbef7d6607-inventory" (OuterVolumeSpecName: "inventory") pod "3952291f-b3f9-4309-ae64-d6cbef7d6607" (UID: "3952291f-b3f9-4309-ae64-d6cbef7d6607"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.018274 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3952291f-b3f9-4309-ae64-d6cbef7d6607-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "3952291f-b3f9-4309-ae64-d6cbef7d6607" (UID: "3952291f-b3f9-4309-ae64-d6cbef7d6607"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.089806 4833 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3952291f-b3f9-4309-ae64-d6cbef7d6607-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.089976 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3952291f-b3f9-4309-ae64-d6cbef7d6607-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.090059 4833 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/3952291f-b3f9-4309-ae64-d6cbef7d6607-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.090118 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5z9j\" (UniqueName: \"kubernetes.io/projected/3952291f-b3f9-4309-ae64-d6cbef7d6607-kube-api-access-g5z9j\") on node \"crc\" DevicePath \"\"" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.090170 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3952291f-b3f9-4309-ae64-d6cbef7d6607-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.498895 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6hdvq" event={"ID":"3952291f-b3f9-4309-ae64-d6cbef7d6607","Type":"ContainerDied","Data":"fdae2bcd697ae226a7218b491757b718cac2356e83e46175719657ca897d24cb"} Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.499387 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdae2bcd697ae226a7218b491757b718cac2356e83e46175719657ca897d24cb" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.499033 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-6hdvq" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.586845 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx"] Feb 19 13:19:49 crc kubenswrapper[4833]: E0219 13:19:49.587303 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3952291f-b3f9-4309-ae64-d6cbef7d6607" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.587329 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3952291f-b3f9-4309-ae64-d6cbef7d6607" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.587662 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="3952291f-b3f9-4309-ae64-d6cbef7d6607" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.588409 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.599018 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.599336 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.600939 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.600954 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xcxf4" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.600993 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.601254 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.612423 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx"] Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.702420 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5e2ba26c-7bab-411e-80f6-bf1e77dce436-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx\" (UID: \"5e2ba26c-7bab-411e-80f6-bf1e77dce436\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.702513 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5c7m\" (UniqueName: \"kubernetes.io/projected/5e2ba26c-7bab-411e-80f6-bf1e77dce436-kube-api-access-k5c7m\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx\" (UID: \"5e2ba26c-7bab-411e-80f6-bf1e77dce436\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.702614 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e2ba26c-7bab-411e-80f6-bf1e77dce436-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx\" (UID: \"5e2ba26c-7bab-411e-80f6-bf1e77dce436\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.702653 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5e2ba26c-7bab-411e-80f6-bf1e77dce436-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx\" (UID: \"5e2ba26c-7bab-411e-80f6-bf1e77dce436\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.702688 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e2ba26c-7bab-411e-80f6-bf1e77dce436-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx\" (UID: \"5e2ba26c-7bab-411e-80f6-bf1e77dce436\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.702708 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e2ba26c-7bab-411e-80f6-bf1e77dce436-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx\" (UID: \"5e2ba26c-7bab-411e-80f6-bf1e77dce436\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.803894 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5c7m\" (UniqueName: \"kubernetes.io/projected/5e2ba26c-7bab-411e-80f6-bf1e77dce436-kube-api-access-k5c7m\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx\" (UID: \"5e2ba26c-7bab-411e-80f6-bf1e77dce436\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.804148 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e2ba26c-7bab-411e-80f6-bf1e77dce436-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx\" (UID: \"5e2ba26c-7bab-411e-80f6-bf1e77dce436\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.804307 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5e2ba26c-7bab-411e-80f6-bf1e77dce436-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx\" (UID: \"5e2ba26c-7bab-411e-80f6-bf1e77dce436\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.804451 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/5e2ba26c-7bab-411e-80f6-bf1e77dce436-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx\" (UID: \"5e2ba26c-7bab-411e-80f6-bf1e77dce436\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.804590 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e2ba26c-7bab-411e-80f6-bf1e77dce436-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx\" (UID: \"5e2ba26c-7bab-411e-80f6-bf1e77dce436\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.804751 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5e2ba26c-7bab-411e-80f6-bf1e77dce436-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx\" (UID: \"5e2ba26c-7bab-411e-80f6-bf1e77dce436\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.808908 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e2ba26c-7bab-411e-80f6-bf1e77dce436-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx\" (UID: \"5e2ba26c-7bab-411e-80f6-bf1e77dce436\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.808930 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5e2ba26c-7bab-411e-80f6-bf1e77dce436-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx\" (UID: \"5e2ba26c-7bab-411e-80f6-bf1e77dce436\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.811780 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5e2ba26c-7bab-411e-80f6-bf1e77dce436-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx\" (UID: \"5e2ba26c-7bab-411e-80f6-bf1e77dce436\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.814534 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e2ba26c-7bab-411e-80f6-bf1e77dce436-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx\" (UID: \"5e2ba26c-7bab-411e-80f6-bf1e77dce436\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.816227 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e2ba26c-7bab-411e-80f6-bf1e77dce436-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx\" (UID: \"5e2ba26c-7bab-411e-80f6-bf1e77dce436\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.834575 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5c7m\" (UniqueName: \"kubernetes.io/projected/5e2ba26c-7bab-411e-80f6-bf1e77dce436-kube-api-access-k5c7m\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx\" (UID: \"5e2ba26c-7bab-411e-80f6-bf1e77dce436\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx" Feb 19 13:19:49 crc kubenswrapper[4833]: I0219 13:19:49.911480 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx" Feb 19 13:19:50 crc kubenswrapper[4833]: I0219 13:19:50.498140 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx"] Feb 19 13:19:51 crc kubenswrapper[4833]: I0219 13:19:51.517042 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx" event={"ID":"5e2ba26c-7bab-411e-80f6-bf1e77dce436","Type":"ContainerStarted","Data":"09608c3889c79bdfaa0f2a8166a3786e00b846be00cebbe315f0d9a00e30ac72"} Feb 19 13:19:51 crc kubenswrapper[4833]: I0219 13:19:51.517561 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx" event={"ID":"5e2ba26c-7bab-411e-80f6-bf1e77dce436","Type":"ContainerStarted","Data":"a1229c76fc68c31d54162c41e8d5af7e34ab6455890a4586b2b13e69c38d466b"} Feb 19 13:19:51 crc kubenswrapper[4833]: I0219 13:19:51.539517 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx" podStartSLOduration=2.100683272 podStartE2EDuration="2.53948486s" podCreationTimestamp="2026-02-19 13:19:49 +0000 UTC" firstStartedPulling="2026-02-19 13:19:50.507971221 +0000 UTC m=+2000.903489989" lastFinishedPulling="2026-02-19 13:19:50.946772809 +0000 UTC m=+2001.342291577" observedRunningTime="2026-02-19 13:19:51.538007621 +0000 UTC m=+2001.933526389" watchObservedRunningTime="2026-02-19 13:19:51.53948486 +0000 UTC m=+2001.935003628" Feb 19 13:20:40 crc kubenswrapper[4833]: I0219 13:20:40.003926 4833 generic.go:334] "Generic (PLEG): container finished" podID="5e2ba26c-7bab-411e-80f6-bf1e77dce436" containerID="09608c3889c79bdfaa0f2a8166a3786e00b846be00cebbe315f0d9a00e30ac72" exitCode=0 Feb 19 13:20:40 crc kubenswrapper[4833]: I0219 13:20:40.003975 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx" event={"ID":"5e2ba26c-7bab-411e-80f6-bf1e77dce436","Type":"ContainerDied","Data":"09608c3889c79bdfaa0f2a8166a3786e00b846be00cebbe315f0d9a00e30ac72"} Feb 19 13:20:41 crc kubenswrapper[4833]: I0219 13:20:41.420557 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx" Feb 19 13:20:41 crc kubenswrapper[4833]: I0219 13:20:41.515030 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5e2ba26c-7bab-411e-80f6-bf1e77dce436-nova-metadata-neutron-config-0\") pod \"5e2ba26c-7bab-411e-80f6-bf1e77dce436\" (UID: \"5e2ba26c-7bab-411e-80f6-bf1e77dce436\") " Feb 19 13:20:41 crc kubenswrapper[4833]: I0219 13:20:41.515117 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5c7m\" (UniqueName: \"kubernetes.io/projected/5e2ba26c-7bab-411e-80f6-bf1e77dce436-kube-api-access-k5c7m\") pod \"5e2ba26c-7bab-411e-80f6-bf1e77dce436\" (UID: \"5e2ba26c-7bab-411e-80f6-bf1e77dce436\") " Feb 19 13:20:41 crc kubenswrapper[4833]: I0219 13:20:41.515176 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e2ba26c-7bab-411e-80f6-bf1e77dce436-neutron-metadata-combined-ca-bundle\") pod \"5e2ba26c-7bab-411e-80f6-bf1e77dce436\" (UID: \"5e2ba26c-7bab-411e-80f6-bf1e77dce436\") " Feb 19 13:20:41 crc kubenswrapper[4833]: I0219 13:20:41.515264 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e2ba26c-7bab-411e-80f6-bf1e77dce436-ssh-key-openstack-edpm-ipam\") pod \"5e2ba26c-7bab-411e-80f6-bf1e77dce436\" (UID: \"5e2ba26c-7bab-411e-80f6-bf1e77dce436\") " Feb 19 13:20:41 crc kubenswrapper[4833]: I0219 13:20:41.515291 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5e2ba26c-7bab-411e-80f6-bf1e77dce436-neutron-ovn-metadata-agent-neutron-config-0\") pod \"5e2ba26c-7bab-411e-80f6-bf1e77dce436\" (UID: \"5e2ba26c-7bab-411e-80f6-bf1e77dce436\") " Feb 19 13:20:41 crc kubenswrapper[4833]: I0219 13:20:41.515451 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e2ba26c-7bab-411e-80f6-bf1e77dce436-inventory\") pod \"5e2ba26c-7bab-411e-80f6-bf1e77dce436\" (UID: \"5e2ba26c-7bab-411e-80f6-bf1e77dce436\") " Feb 19 13:20:41 crc kubenswrapper[4833]: I0219 13:20:41.520439 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e2ba26c-7bab-411e-80f6-bf1e77dce436-kube-api-access-k5c7m" (OuterVolumeSpecName: "kube-api-access-k5c7m") pod "5e2ba26c-7bab-411e-80f6-bf1e77dce436" (UID: "5e2ba26c-7bab-411e-80f6-bf1e77dce436"). InnerVolumeSpecName "kube-api-access-k5c7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:20:41 crc kubenswrapper[4833]: I0219 13:20:41.522172 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e2ba26c-7bab-411e-80f6-bf1e77dce436-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "5e2ba26c-7bab-411e-80f6-bf1e77dce436" (UID: "5e2ba26c-7bab-411e-80f6-bf1e77dce436"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:20:41 crc kubenswrapper[4833]: I0219 13:20:41.539731 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e2ba26c-7bab-411e-80f6-bf1e77dce436-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "5e2ba26c-7bab-411e-80f6-bf1e77dce436" (UID: "5e2ba26c-7bab-411e-80f6-bf1e77dce436"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:20:41 crc kubenswrapper[4833]: I0219 13:20:41.542854 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e2ba26c-7bab-411e-80f6-bf1e77dce436-inventory" (OuterVolumeSpecName: "inventory") pod "5e2ba26c-7bab-411e-80f6-bf1e77dce436" (UID: "5e2ba26c-7bab-411e-80f6-bf1e77dce436"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:20:41 crc kubenswrapper[4833]: I0219 13:20:41.546053 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e2ba26c-7bab-411e-80f6-bf1e77dce436-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5e2ba26c-7bab-411e-80f6-bf1e77dce436" (UID: "5e2ba26c-7bab-411e-80f6-bf1e77dce436"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:20:41 crc kubenswrapper[4833]: I0219 13:20:41.556936 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e2ba26c-7bab-411e-80f6-bf1e77dce436-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "5e2ba26c-7bab-411e-80f6-bf1e77dce436" (UID: "5e2ba26c-7bab-411e-80f6-bf1e77dce436"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:20:41 crc kubenswrapper[4833]: I0219 13:20:41.618770 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e2ba26c-7bab-411e-80f6-bf1e77dce436-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 13:20:41 crc kubenswrapper[4833]: I0219 13:20:41.619165 4833 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5e2ba26c-7bab-411e-80f6-bf1e77dce436-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 13:20:41 crc kubenswrapper[4833]: I0219 13:20:41.619194 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e2ba26c-7bab-411e-80f6-bf1e77dce436-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 13:20:41 crc kubenswrapper[4833]: I0219 13:20:41.619220 4833 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5e2ba26c-7bab-411e-80f6-bf1e77dce436-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 13:20:41 crc kubenswrapper[4833]: I0219 13:20:41.619556 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5c7m\" (UniqueName: \"kubernetes.io/projected/5e2ba26c-7bab-411e-80f6-bf1e77dce436-kube-api-access-k5c7m\") on node \"crc\" DevicePath \"\"" Feb 19 13:20:41 crc kubenswrapper[4833]: I0219 13:20:41.619601 4833 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e2ba26c-7bab-411e-80f6-bf1e77dce436-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:20:42 crc kubenswrapper[4833]: I0219 13:20:42.031265 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx" event={"ID":"5e2ba26c-7bab-411e-80f6-bf1e77dce436","Type":"ContainerDied","Data":"a1229c76fc68c31d54162c41e8d5af7e34ab6455890a4586b2b13e69c38d466b"} Feb 19 13:20:42 crc kubenswrapper[4833]: I0219 13:20:42.031317 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1229c76fc68c31d54162c41e8d5af7e34ab6455890a4586b2b13e69c38d466b" Feb 19 13:20:42 crc kubenswrapper[4833]: I0219 13:20:42.031425 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx" Feb 19 13:20:42 crc kubenswrapper[4833]: I0219 13:20:42.131775 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv"] Feb 19 13:20:42 crc kubenswrapper[4833]: E0219 13:20:42.132336 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e2ba26c-7bab-411e-80f6-bf1e77dce436" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 19 13:20:42 crc kubenswrapper[4833]: I0219 13:20:42.132360 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e2ba26c-7bab-411e-80f6-bf1e77dce436" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 19 13:20:42 crc kubenswrapper[4833]: I0219 13:20:42.132602 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e2ba26c-7bab-411e-80f6-bf1e77dce436" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 19 13:20:42 crc kubenswrapper[4833]: I0219 13:20:42.133462 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv" Feb 19 13:20:42 crc kubenswrapper[4833]: I0219 13:20:42.137546 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 13:20:42 crc kubenswrapper[4833]: I0219 13:20:42.137665 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 13:20:42 crc kubenswrapper[4833]: I0219 13:20:42.137834 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xcxf4" Feb 19 13:20:42 crc kubenswrapper[4833]: I0219 13:20:42.137908 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 13:20:42 crc kubenswrapper[4833]: I0219 13:20:42.138745 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 19 13:20:42 crc kubenswrapper[4833]: I0219 13:20:42.141715 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv"] Feb 19 13:20:42 crc kubenswrapper[4833]: I0219 13:20:42.233766 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv\" (UID: \"c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv" Feb 19 13:20:42 crc kubenswrapper[4833]: I0219 13:20:42.234166 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x9xn\" (UniqueName: \"kubernetes.io/projected/c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0-kube-api-access-8x9xn\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv\" (UID: \"c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv" Feb 19 13:20:42 crc kubenswrapper[4833]: I0219 13:20:42.234365 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv\" (UID: 
\"c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv" Feb 19 13:20:42 crc kubenswrapper[4833]: I0219 13:20:42.234575 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv\" (UID: \"c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv" Feb 19 13:20:42 crc kubenswrapper[4833]: I0219 13:20:42.234700 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv\" (UID: \"c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv" Feb 19 13:20:42 crc kubenswrapper[4833]: I0219 13:20:42.337250 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x9xn\" (UniqueName: \"kubernetes.io/projected/c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0-kube-api-access-8x9xn\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv\" (UID: \"c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv" Feb 19 13:20:42 crc kubenswrapper[4833]: I0219 13:20:42.337340 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv\" (UID: \"c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv" Feb 19 13:20:42 crc kubenswrapper[4833]: I0219 13:20:42.337472 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv\" (UID: \"c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv" Feb 19 13:20:42 crc kubenswrapper[4833]: I0219 13:20:42.337555 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv\" (UID: \"c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv" Feb 19 13:20:42 crc kubenswrapper[4833]: I0219 13:20:42.337648 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv\" (UID: \"c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv" Feb 19 13:20:42 crc kubenswrapper[4833]: I0219 13:20:42.342952 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv\" 
(UID: \"c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv" Feb 19 13:20:42 crc kubenswrapper[4833]: I0219 13:20:42.344003 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv\" (UID: \"c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv" Feb 19 13:20:42 crc kubenswrapper[4833]: I0219 13:20:42.344503 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv\" (UID: \"c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv" Feb 19 13:20:42 crc kubenswrapper[4833]: I0219 13:20:42.346251 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv\" (UID: \"c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv" Feb 19 13:20:42 crc kubenswrapper[4833]: I0219 13:20:42.372252 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x9xn\" (UniqueName: \"kubernetes.io/projected/c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0-kube-api-access-8x9xn\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv\" (UID: \"c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv" Feb 19 13:20:42 crc kubenswrapper[4833]: I0219 13:20:42.456098 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv" Feb 19 13:20:43 crc kubenswrapper[4833]: I0219 13:20:43.010055 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv"] Feb 19 13:20:43 crc kubenswrapper[4833]: I0219 13:20:43.045587 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv" event={"ID":"c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0","Type":"ContainerStarted","Data":"f3a5715b7140705f896ae4f066ecaa39c4e0512cfb7cf1f6cc78da7ec4776d2a"} Feb 19 13:20:44 crc kubenswrapper[4833]: I0219 13:20:44.061061 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv" event={"ID":"c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0","Type":"ContainerStarted","Data":"3cd75de870cdf5f700a660229f0c0b2cf551494e190674b6baab175c48e9ba6c"} Feb 19 13:20:44 crc kubenswrapper[4833]: I0219 13:20:44.087473 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv" podStartSLOduration=1.697863728 podStartE2EDuration="2.08745202s" podCreationTimestamp="2026-02-19 13:20:42 +0000 UTC" firstStartedPulling="2026-02-19 13:20:43.025847614 +0000 UTC m=+2053.421366382" lastFinishedPulling="2026-02-19 13:20:43.415435906 +0000 UTC m=+2053.810954674" observedRunningTime="2026-02-19 13:20:44.08022324 +0000 UTC m=+2054.475742008" watchObservedRunningTime="2026-02-19 13:20:44.08745202 +0000 UTC m=+2054.482970788" Feb 19 13:20:45 crc kubenswrapper[4833]: I0219 13:20:45.958863 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dg8qb"] Feb 19 13:20:45 crc kubenswrapper[4833]: I0219 13:20:45.960687 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dg8qb" Feb 19 13:20:45 crc kubenswrapper[4833]: I0219 13:20:45.973353 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dg8qb"] Feb 19 13:20:46 crc kubenswrapper[4833]: I0219 13:20:46.097608 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn9sx\" (UniqueName: \"kubernetes.io/projected/289fe56b-8980-40a4-80e3-d53066ee2ac3-kube-api-access-qn9sx\") pod \"redhat-operators-dg8qb\" (UID: \"289fe56b-8980-40a4-80e3-d53066ee2ac3\") " pod="openshift-marketplace/redhat-operators-dg8qb" Feb 19 13:20:46 crc kubenswrapper[4833]: I0219 13:20:46.097972 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/289fe56b-8980-40a4-80e3-d53066ee2ac3-utilities\") pod \"redhat-operators-dg8qb\" (UID: \"289fe56b-8980-40a4-80e3-d53066ee2ac3\") " pod="openshift-marketplace/redhat-operators-dg8qb" Feb 19 13:20:46 crc kubenswrapper[4833]: I0219 13:20:46.098180 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/289fe56b-8980-40a4-80e3-d53066ee2ac3-catalog-content\") pod \"redhat-operators-dg8qb\" (UID: \"289fe56b-8980-40a4-80e3-d53066ee2ac3\") " pod="openshift-marketplace/redhat-operators-dg8qb" Feb 19 13:20:46 crc kubenswrapper[4833]: I0219 13:20:46.199811 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/289fe56b-8980-40a4-80e3-d53066ee2ac3-utilities\") pod \"redhat-operators-dg8qb\" (UID: \"289fe56b-8980-40a4-80e3-d53066ee2ac3\") " pod="openshift-marketplace/redhat-operators-dg8qb" Feb 19 13:20:46 crc kubenswrapper[4833]: I0219 13:20:46.199926 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/289fe56b-8980-40a4-80e3-d53066ee2ac3-catalog-content\") pod \"redhat-operators-dg8qb\" (UID: \"289fe56b-8980-40a4-80e3-d53066ee2ac3\") " pod="openshift-marketplace/redhat-operators-dg8qb" Feb 19 13:20:46 crc kubenswrapper[4833]: I0219 13:20:46.200080 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn9sx\" (UniqueName: \"kubernetes.io/projected/289fe56b-8980-40a4-80e3-d53066ee2ac3-kube-api-access-qn9sx\") pod \"redhat-operators-dg8qb\" (UID: \"289fe56b-8980-40a4-80e3-d53066ee2ac3\") " pod="openshift-marketplace/redhat-operators-dg8qb" Feb 19 13:20:46 crc kubenswrapper[4833]: I0219 13:20:46.200604 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/289fe56b-8980-40a4-80e3-d53066ee2ac3-catalog-content\") pod \"redhat-operators-dg8qb\" (UID: \"289fe56b-8980-40a4-80e3-d53066ee2ac3\") " pod="openshift-marketplace/redhat-operators-dg8qb" Feb 19 13:20:46 crc kubenswrapper[4833]: I0219 13:20:46.200624 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/289fe56b-8980-40a4-80e3-d53066ee2ac3-utilities\") pod \"redhat-operators-dg8qb\" (UID: \"289fe56b-8980-40a4-80e3-d53066ee2ac3\") " pod="openshift-marketplace/redhat-operators-dg8qb" Feb 19 13:20:46 crc kubenswrapper[4833]: I0219 13:20:46.219402 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qn9sx\" (UniqueName: \"kubernetes.io/projected/289fe56b-8980-40a4-80e3-d53066ee2ac3-kube-api-access-qn9sx\") pod \"redhat-operators-dg8qb\" (UID: \"289fe56b-8980-40a4-80e3-d53066ee2ac3\") " pod="openshift-marketplace/redhat-operators-dg8qb" Feb 19 13:20:46 crc kubenswrapper[4833]: I0219 13:20:46.287609 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dg8qb" Feb 19 13:20:46 crc kubenswrapper[4833]: I0219 13:20:46.786726 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dg8qb"] Feb 19 13:20:46 crc kubenswrapper[4833]: I0219 13:20:46.951182 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-t9ld9"] Feb 19 13:20:46 crc kubenswrapper[4833]: I0219 13:20:46.953452 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t9ld9" Feb 19 13:20:46 crc kubenswrapper[4833]: I0219 13:20:46.980047 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t9ld9"] Feb 19 13:20:47 crc kubenswrapper[4833]: I0219 13:20:47.121567 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0b60d1b-dd62-4e93-ae49-9dee3bcca248-catalog-content\") pod \"community-operators-t9ld9\" (UID: \"c0b60d1b-dd62-4e93-ae49-9dee3bcca248\") " pod="openshift-marketplace/community-operators-t9ld9" Feb 19 13:20:47 crc kubenswrapper[4833]: I0219 13:20:47.121671 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2f8k\" (UniqueName: \"kubernetes.io/projected/c0b60d1b-dd62-4e93-ae49-9dee3bcca248-kube-api-access-m2f8k\") pod \"community-operators-t9ld9\" (UID: \"c0b60d1b-dd62-4e93-ae49-9dee3bcca248\") " pod="openshift-marketplace/community-operators-t9ld9" Feb 19 13:20:47 crc kubenswrapper[4833]: I0219 13:20:47.121724 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0b60d1b-dd62-4e93-ae49-9dee3bcca248-utilities\") pod \"community-operators-t9ld9\" (UID: \"c0b60d1b-dd62-4e93-ae49-9dee3bcca248\") " pod="openshift-marketplace/community-operators-t9ld9" Feb 19 13:20:47 crc kubenswrapper[4833]: I0219 13:20:47.142257 4833 generic.go:334] "Generic (PLEG): container finished" podID="289fe56b-8980-40a4-80e3-d53066ee2ac3" containerID="ef7c66ffccba855313fa598810ce111bc8764422ee6fe9d77e51ebdee9d52d25" exitCode=0 Feb 19 13:20:47 crc kubenswrapper[4833]: I0219 13:20:47.142320 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dg8qb" event={"ID":"289fe56b-8980-40a4-80e3-d53066ee2ac3","Type":"ContainerDied","Data":"ef7c66ffccba855313fa598810ce111bc8764422ee6fe9d77e51ebdee9d52d25"} Feb 19 13:20:47 crc kubenswrapper[4833]: I0219 13:20:47.142346 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dg8qb" event={"ID":"289fe56b-8980-40a4-80e3-d53066ee2ac3","Type":"ContainerStarted","Data":"05b6d71123dfbecfe0fb6a8cd6b5c24658b71ca957104579958f24cd9bf4dc49"} Feb 19 13:20:47 crc kubenswrapper[4833]: I0219 13:20:47.223494 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2f8k\" (UniqueName: 
\"kubernetes.io/projected/c0b60d1b-dd62-4e93-ae49-9dee3bcca248-kube-api-access-m2f8k\") pod \"community-operators-t9ld9\" (UID: \"c0b60d1b-dd62-4e93-ae49-9dee3bcca248\") " pod="openshift-marketplace/community-operators-t9ld9" Feb 19 13:20:47 crc kubenswrapper[4833]: I0219 13:20:47.223604 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0b60d1b-dd62-4e93-ae49-9dee3bcca248-utilities\") pod \"community-operators-t9ld9\" (UID: \"c0b60d1b-dd62-4e93-ae49-9dee3bcca248\") " pod="openshift-marketplace/community-operators-t9ld9" Feb 19 13:20:47 crc kubenswrapper[4833]: I0219 13:20:47.223680 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0b60d1b-dd62-4e93-ae49-9dee3bcca248-catalog-content\") pod \"community-operators-t9ld9\" (UID: \"c0b60d1b-dd62-4e93-ae49-9dee3bcca248\") " pod="openshift-marketplace/community-operators-t9ld9" Feb 19 13:20:47 crc kubenswrapper[4833]: I0219 13:20:47.224159 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0b60d1b-dd62-4e93-ae49-9dee3bcca248-catalog-content\") pod \"community-operators-t9ld9\" (UID: \"c0b60d1b-dd62-4e93-ae49-9dee3bcca248\") " pod="openshift-marketplace/community-operators-t9ld9" Feb 19 13:20:47 crc kubenswrapper[4833]: I0219 13:20:47.224234 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0b60d1b-dd62-4e93-ae49-9dee3bcca248-utilities\") pod \"community-operators-t9ld9\" (UID: \"c0b60d1b-dd62-4e93-ae49-9dee3bcca248\") " pod="openshift-marketplace/community-operators-t9ld9" Feb 19 13:20:47 crc kubenswrapper[4833]: I0219 13:20:47.245965 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2f8k\" (UniqueName: \"kubernetes.io/projected/c0b60d1b-dd62-4e93-ae49-9dee3bcca248-kube-api-access-m2f8k\") pod \"community-operators-t9ld9\" (UID: \"c0b60d1b-dd62-4e93-ae49-9dee3bcca248\") " pod="openshift-marketplace/community-operators-t9ld9" Feb 19 13:20:47 crc kubenswrapper[4833]: I0219 13:20:47.278942 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t9ld9" Feb 19 13:20:47 crc kubenswrapper[4833]: I0219 13:20:47.775597 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t9ld9"] Feb 19 13:20:48 crc kubenswrapper[4833]: I0219 13:20:48.153986 4833 generic.go:334] "Generic (PLEG): container finished" podID="c0b60d1b-dd62-4e93-ae49-9dee3bcca248" containerID="e56a5c33f25bf0286227f65792ac536a7e4033531c01a8c4f7614ca61d9287ff" exitCode=0 Feb 19 13:20:48 crc kubenswrapper[4833]: I0219 13:20:48.154065 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t9ld9" event={"ID":"c0b60d1b-dd62-4e93-ae49-9dee3bcca248","Type":"ContainerDied","Data":"e56a5c33f25bf0286227f65792ac536a7e4033531c01a8c4f7614ca61d9287ff"} Feb 19 13:20:48 crc kubenswrapper[4833]: I0219 13:20:48.154129 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t9ld9" event={"ID":"c0b60d1b-dd62-4e93-ae49-9dee3bcca248","Type":"ContainerStarted","Data":"4d51f980e24b957ccca8b633469dad28d2314b841fa6ffae5949cd4bb32d501c"} Feb 19 13:20:49 crc kubenswrapper[4833]: I0219 13:20:49.164551 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t9ld9" event={"ID":"c0b60d1b-dd62-4e93-ae49-9dee3bcca248","Type":"ContainerStarted","Data":"99e4c45c83f639d714a5c11f27b7518e9dc9626c4a3ff05e531c993f3e3e5bba"} Feb 19 13:20:49 crc kubenswrapper[4833]: I0219 13:20:49.168864 4833 generic.go:334] "Generic (PLEG): container finished" podID="289fe56b-8980-40a4-80e3-d53066ee2ac3" containerID="9961ccacf426af17e8de4ed4c2afe4767081d615d98dc732ee43505c50a9d05b" exitCode=0 Feb 19 13:20:49 crc kubenswrapper[4833]: I0219 13:20:49.169194 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dg8qb" event={"ID":"289fe56b-8980-40a4-80e3-d53066ee2ac3","Type":"ContainerDied","Data":"9961ccacf426af17e8de4ed4c2afe4767081d615d98dc732ee43505c50a9d05b"} Feb 19 13:20:49 crc kubenswrapper[4833]: I0219 13:20:49.171344 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 13:20:50 crc kubenswrapper[4833]: I0219 13:20:50.180894 4833 generic.go:334] "Generic (PLEG): container finished" podID="c0b60d1b-dd62-4e93-ae49-9dee3bcca248" containerID="99e4c45c83f639d714a5c11f27b7518e9dc9626c4a3ff05e531c993f3e3e5bba" exitCode=0 Feb 19 13:20:50 crc kubenswrapper[4833]: I0219 13:20:50.180953 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t9ld9" event={"ID":"c0b60d1b-dd62-4e93-ae49-9dee3bcca248","Type":"ContainerDied","Data":"99e4c45c83f639d714a5c11f27b7518e9dc9626c4a3ff05e531c993f3e3e5bba"} Feb 19 13:20:51 crc kubenswrapper[4833]: I0219 13:20:51.190382 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dg8qb" event={"ID":"289fe56b-8980-40a4-80e3-d53066ee2ac3","Type":"ContainerStarted","Data":"5db988005cdbf6af99c90ca05452620b2ea82fca481b2389d304a54056a4b37d"} Feb 19 13:20:51 crc kubenswrapper[4833]: I0219 13:20:51.215394 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dg8qb" podStartSLOduration=3.263426952 podStartE2EDuration="6.215376872s" podCreationTimestamp="2026-02-19 13:20:45 +0000 UTC" firstStartedPulling="2026-02-19 13:20:47.148643356 +0000 UTC m=+2057.544162124" lastFinishedPulling="2026-02-19 13:20:50.100593276 
+0000 UTC m=+2060.496112044" observedRunningTime="2026-02-19 13:20:51.209135198 +0000 UTC m=+2061.604653966" watchObservedRunningTime="2026-02-19 13:20:51.215376872 +0000 UTC m=+2061.610895640" Feb 19 13:20:52 crc kubenswrapper[4833]: I0219 13:20:52.198894 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t9ld9" event={"ID":"c0b60d1b-dd62-4e93-ae49-9dee3bcca248","Type":"ContainerStarted","Data":"62b463ccb9ccda2e4dbdd83b23eaadc41d2f5704b3e73565a465e72d77cf40fc"} Feb 19 13:20:52 crc kubenswrapper[4833]: I0219 13:20:52.223445 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-t9ld9" podStartSLOduration=3.629458638 podStartE2EDuration="6.223426058s" podCreationTimestamp="2026-02-19 13:20:46 +0000 UTC" firstStartedPulling="2026-02-19 13:20:48.158764187 +0000 UTC m=+2058.554282955" lastFinishedPulling="2026-02-19 13:20:50.752731597 +0000 UTC m=+2061.148250375" observedRunningTime="2026-02-19 13:20:52.216039565 +0000 UTC m=+2062.611558333" watchObservedRunningTime="2026-02-19 13:20:52.223426058 +0000 UTC m=+2062.618944816" Feb 19 13:20:56 crc kubenswrapper[4833]: I0219 13:20:56.288053 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dg8qb" Feb 19 13:20:56 crc kubenswrapper[4833]: I0219 13:20:56.289131 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dg8qb" Feb 19 13:20:56 crc kubenswrapper[4833]: I0219 13:20:56.345931 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dg8qb" Feb 19 13:20:57 crc kubenswrapper[4833]: I0219 13:20:57.279130 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-t9ld9" Feb 19 13:20:57 crc kubenswrapper[4833]: I0219 13:20:57.279839 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-t9ld9" Feb 19 13:20:57 crc kubenswrapper[4833]: I0219 13:20:57.331189 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dg8qb" Feb 19 13:20:57 crc kubenswrapper[4833]: I0219 13:20:57.352537 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-t9ld9" Feb 19 13:20:57 crc kubenswrapper[4833]: I0219 13:20:57.397380 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dg8qb"] Feb 19 13:20:58 crc kubenswrapper[4833]: I0219 13:20:58.342257 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-t9ld9" Feb 19 13:20:59 crc kubenswrapper[4833]: I0219 13:20:59.149873 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t9ld9"] Feb 19 13:20:59 crc kubenswrapper[4833]: I0219 13:20:59.275724 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dg8qb" podUID="289fe56b-8980-40a4-80e3-d53066ee2ac3" containerName="registry-server" containerID="cri-o://5db988005cdbf6af99c90ca05452620b2ea82fca481b2389d304a54056a4b37d" gracePeriod=2 Feb 19 13:20:59 crc kubenswrapper[4833]: I0219 13:20:59.798049 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dg8qb" Feb 19 13:20:59 crc kubenswrapper[4833]: I0219 13:20:59.889330 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qn9sx\" (UniqueName: \"kubernetes.io/projected/289fe56b-8980-40a4-80e3-d53066ee2ac3-kube-api-access-qn9sx\") pod \"289fe56b-8980-40a4-80e3-d53066ee2ac3\" (UID: \"289fe56b-8980-40a4-80e3-d53066ee2ac3\") " Feb 19 13:20:59 crc kubenswrapper[4833]: I0219 13:20:59.889472 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/289fe56b-8980-40a4-80e3-d53066ee2ac3-catalog-content\") pod \"289fe56b-8980-40a4-80e3-d53066ee2ac3\" (UID: \"289fe56b-8980-40a4-80e3-d53066ee2ac3\") " Feb 19 13:20:59 crc kubenswrapper[4833]: I0219 13:20:59.889737 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/289fe56b-8980-40a4-80e3-d53066ee2ac3-utilities\") pod \"289fe56b-8980-40a4-80e3-d53066ee2ac3\" (UID: \"289fe56b-8980-40a4-80e3-d53066ee2ac3\") " Feb 19 13:20:59 crc kubenswrapper[4833]: I0219 13:20:59.890483 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/289fe56b-8980-40a4-80e3-d53066ee2ac3-utilities" (OuterVolumeSpecName: "utilities") pod "289fe56b-8980-40a4-80e3-d53066ee2ac3" (UID: "289fe56b-8980-40a4-80e3-d53066ee2ac3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:20:59 crc kubenswrapper[4833]: I0219 13:20:59.895969 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/289fe56b-8980-40a4-80e3-d53066ee2ac3-kube-api-access-qn9sx" (OuterVolumeSpecName: "kube-api-access-qn9sx") pod "289fe56b-8980-40a4-80e3-d53066ee2ac3" (UID: "289fe56b-8980-40a4-80e3-d53066ee2ac3"). InnerVolumeSpecName "kube-api-access-qn9sx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:20:59 crc kubenswrapper[4833]: I0219 13:20:59.992002 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qn9sx\" (UniqueName: \"kubernetes.io/projected/289fe56b-8980-40a4-80e3-d53066ee2ac3-kube-api-access-qn9sx\") on node \"crc\" DevicePath \"\"" Feb 19 13:20:59 crc kubenswrapper[4833]: I0219 13:20:59.992047 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/289fe56b-8980-40a4-80e3-d53066ee2ac3-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:21:00 crc kubenswrapper[4833]: I0219 13:21:00.018604 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/289fe56b-8980-40a4-80e3-d53066ee2ac3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "289fe56b-8980-40a4-80e3-d53066ee2ac3" (UID: "289fe56b-8980-40a4-80e3-d53066ee2ac3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:21:00 crc kubenswrapper[4833]: I0219 13:21:00.093689 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/289fe56b-8980-40a4-80e3-d53066ee2ac3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:21:00 crc kubenswrapper[4833]: I0219 13:21:00.294726 4833 generic.go:334] "Generic (PLEG): container finished" podID="289fe56b-8980-40a4-80e3-d53066ee2ac3" containerID="5db988005cdbf6af99c90ca05452620b2ea82fca481b2389d304a54056a4b37d" exitCode=0 Feb 19 13:21:00 crc kubenswrapper[4833]: I0219 13:21:00.294805 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dg8qb" Feb 19 13:21:00 crc kubenswrapper[4833]: I0219 13:21:00.294834 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dg8qb" event={"ID":"289fe56b-8980-40a4-80e3-d53066ee2ac3","Type":"ContainerDied","Data":"5db988005cdbf6af99c90ca05452620b2ea82fca481b2389d304a54056a4b37d"} Feb 19 13:21:00 crc kubenswrapper[4833]: I0219 13:21:00.294900 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dg8qb" event={"ID":"289fe56b-8980-40a4-80e3-d53066ee2ac3","Type":"ContainerDied","Data":"05b6d71123dfbecfe0fb6a8cd6b5c24658b71ca957104579958f24cd9bf4dc49"} Feb 19 13:21:00 crc kubenswrapper[4833]: I0219 13:21:00.294925 4833 scope.go:117] "RemoveContainer" containerID="5db988005cdbf6af99c90ca05452620b2ea82fca481b2389d304a54056a4b37d" Feb 19 13:21:00 crc kubenswrapper[4833]: I0219 13:21:00.295198 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-t9ld9" podUID="c0b60d1b-dd62-4e93-ae49-9dee3bcca248" containerName="registry-server" containerID="cri-o://62b463ccb9ccda2e4dbdd83b23eaadc41d2f5704b3e73565a465e72d77cf40fc" gracePeriod=2 Feb 19 13:21:00 crc kubenswrapper[4833]: I0219 13:21:00.338810 4833 scope.go:117] "RemoveContainer" containerID="9961ccacf426af17e8de4ed4c2afe4767081d615d98dc732ee43505c50a9d05b" Feb 19 13:21:00 crc kubenswrapper[4833]: I0219 13:21:00.353617 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dg8qb"] Feb 19 13:21:00 crc kubenswrapper[4833]: I0219 13:21:00.362790 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dg8qb"] Feb 19 13:21:00 crc kubenswrapper[4833]: I0219 13:21:00.366465 4833 scope.go:117] "RemoveContainer" containerID="ef7c66ffccba855313fa598810ce111bc8764422ee6fe9d77e51ebdee9d52d25" Feb 19 13:21:00 crc kubenswrapper[4833]: I0219 13:21:00.534407 4833 scope.go:117] "RemoveContainer" containerID="5db988005cdbf6af99c90ca05452620b2ea82fca481b2389d304a54056a4b37d" Feb 19 13:21:00 crc kubenswrapper[4833]: E0219 13:21:00.535033 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5db988005cdbf6af99c90ca05452620b2ea82fca481b2389d304a54056a4b37d\": container with ID starting with 5db988005cdbf6af99c90ca05452620b2ea82fca481b2389d304a54056a4b37d not found: ID does not exist" containerID="5db988005cdbf6af99c90ca05452620b2ea82fca481b2389d304a54056a4b37d" Feb 19 13:21:00 crc kubenswrapper[4833]: I0219 13:21:00.535074 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5db988005cdbf6af99c90ca05452620b2ea82fca481b2389d304a54056a4b37d"} err="failed to 
get container status \"5db988005cdbf6af99c90ca05452620b2ea82fca481b2389d304a54056a4b37d\": rpc error: code = NotFound desc = could not find container \"5db988005cdbf6af99c90ca05452620b2ea82fca481b2389d304a54056a4b37d\": container with ID starting with 5db988005cdbf6af99c90ca05452620b2ea82fca481b2389d304a54056a4b37d not found: ID does not exist"
Feb 19 13:21:00 crc kubenswrapper[4833]: I0219 13:21:00.535100 4833 scope.go:117] "RemoveContainer" containerID="9961ccacf426af17e8de4ed4c2afe4767081d615d98dc732ee43505c50a9d05b"
Feb 19 13:21:00 crc kubenswrapper[4833]: E0219 13:21:00.535372 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9961ccacf426af17e8de4ed4c2afe4767081d615d98dc732ee43505c50a9d05b\": container with ID starting with 9961ccacf426af17e8de4ed4c2afe4767081d615d98dc732ee43505c50a9d05b not found: ID does not exist" containerID="9961ccacf426af17e8de4ed4c2afe4767081d615d98dc732ee43505c50a9d05b"
Feb 19 13:21:00 crc kubenswrapper[4833]: I0219 13:21:00.535426 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9961ccacf426af17e8de4ed4c2afe4767081d615d98dc732ee43505c50a9d05b"} err="failed to get container status \"9961ccacf426af17e8de4ed4c2afe4767081d615d98dc732ee43505c50a9d05b\": rpc error: code = NotFound desc = could not find container \"9961ccacf426af17e8de4ed4c2afe4767081d615d98dc732ee43505c50a9d05b\": container with ID starting with 9961ccacf426af17e8de4ed4c2afe4767081d615d98dc732ee43505c50a9d05b not found: ID does not exist"
Feb 19 13:21:00 crc kubenswrapper[4833]: I0219 13:21:00.535444 4833 scope.go:117] "RemoveContainer" containerID="ef7c66ffccba855313fa598810ce111bc8764422ee6fe9d77e51ebdee9d52d25"
Feb 19 13:21:00 crc kubenswrapper[4833]: E0219 13:21:00.535993 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef7c66ffccba855313fa598810ce111bc8764422ee6fe9d77e51ebdee9d52d25\": container with ID starting with ef7c66ffccba855313fa598810ce111bc8764422ee6fe9d77e51ebdee9d52d25 not found: ID does not exist" containerID="ef7c66ffccba855313fa598810ce111bc8764422ee6fe9d77e51ebdee9d52d25"
Feb 19 13:21:00 crc kubenswrapper[4833]: I0219 13:21:00.536019 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef7c66ffccba855313fa598810ce111bc8764422ee6fe9d77e51ebdee9d52d25"} err="failed to get container status \"ef7c66ffccba855313fa598810ce111bc8764422ee6fe9d77e51ebdee9d52d25\": rpc error: code = NotFound desc = could not find container \"ef7c66ffccba855313fa598810ce111bc8764422ee6fe9d77e51ebdee9d52d25\": container with ID starting with ef7c66ffccba855313fa598810ce111bc8764422ee6fe9d77e51ebdee9d52d25 not found: ID does not exist"
Feb 19 13:21:00 crc kubenswrapper[4833]: I0219 13:21:00.768285 4833 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-t9ld9" Feb 19 13:21:00 crc kubenswrapper[4833]: I0219 13:21:00.906297 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0b60d1b-dd62-4e93-ae49-9dee3bcca248-catalog-content\") pod \"c0b60d1b-dd62-4e93-ae49-9dee3bcca248\" (UID: \"c0b60d1b-dd62-4e93-ae49-9dee3bcca248\") " Feb 19 13:21:00 crc kubenswrapper[4833]: I0219 13:21:00.906424 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0b60d1b-dd62-4e93-ae49-9dee3bcca248-utilities\") pod \"c0b60d1b-dd62-4e93-ae49-9dee3bcca248\" (UID: \"c0b60d1b-dd62-4e93-ae49-9dee3bcca248\") " Feb 19 13:21:00 crc kubenswrapper[4833]: I0219 13:21:00.906543 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2f8k\" (UniqueName: \"kubernetes.io/projected/c0b60d1b-dd62-4e93-ae49-9dee3bcca248-kube-api-access-m2f8k\") pod \"c0b60d1b-dd62-4e93-ae49-9dee3bcca248\" (UID: \"c0b60d1b-dd62-4e93-ae49-9dee3bcca248\") " Feb 19 13:21:00 crc kubenswrapper[4833]: I0219 13:21:00.907362 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0b60d1b-dd62-4e93-ae49-9dee3bcca248-utilities" (OuterVolumeSpecName: "utilities") pod "c0b60d1b-dd62-4e93-ae49-9dee3bcca248" (UID: "c0b60d1b-dd62-4e93-ae49-9dee3bcca248"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:21:00 crc kubenswrapper[4833]: I0219 13:21:00.911328 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0b60d1b-dd62-4e93-ae49-9dee3bcca248-kube-api-access-m2f8k" (OuterVolumeSpecName: "kube-api-access-m2f8k") pod "c0b60d1b-dd62-4e93-ae49-9dee3bcca248" (UID: "c0b60d1b-dd62-4e93-ae49-9dee3bcca248"). InnerVolumeSpecName "kube-api-access-m2f8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:21:00 crc kubenswrapper[4833]: I0219 13:21:00.966278 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0b60d1b-dd62-4e93-ae49-9dee3bcca248-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0b60d1b-dd62-4e93-ae49-9dee3bcca248" (UID: "c0b60d1b-dd62-4e93-ae49-9dee3bcca248"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:21:01 crc kubenswrapper[4833]: I0219 13:21:01.009488 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0b60d1b-dd62-4e93-ae49-9dee3bcca248-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:21:01 crc kubenswrapper[4833]: I0219 13:21:01.009537 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0b60d1b-dd62-4e93-ae49-9dee3bcca248-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:21:01 crc kubenswrapper[4833]: I0219 13:21:01.009562 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2f8k\" (UniqueName: \"kubernetes.io/projected/c0b60d1b-dd62-4e93-ae49-9dee3bcca248-kube-api-access-m2f8k\") on node \"crc\" DevicePath \"\"" Feb 19 13:21:01 crc kubenswrapper[4833]: I0219 13:21:01.305243 4833 generic.go:334] "Generic (PLEG): container finished" podID="c0b60d1b-dd62-4e93-ae49-9dee3bcca248" containerID="62b463ccb9ccda2e4dbdd83b23eaadc41d2f5704b3e73565a465e72d77cf40fc" exitCode=0 Feb 19 13:21:01 crc kubenswrapper[4833]: I0219 13:21:01.305303 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t9ld9" event={"ID":"c0b60d1b-dd62-4e93-ae49-9dee3bcca248","Type":"ContainerDied","Data":"62b463ccb9ccda2e4dbdd83b23eaadc41d2f5704b3e73565a465e72d77cf40fc"} Feb 19 13:21:01 crc kubenswrapper[4833]: I0219 13:21:01.305330 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t9ld9" event={"ID":"c0b60d1b-dd62-4e93-ae49-9dee3bcca248","Type":"ContainerDied","Data":"4d51f980e24b957ccca8b633469dad28d2314b841fa6ffae5949cd4bb32d501c"} Feb 19 13:21:01 crc kubenswrapper[4833]: I0219 13:21:01.305346 4833 scope.go:117] "RemoveContainer" containerID="62b463ccb9ccda2e4dbdd83b23eaadc41d2f5704b3e73565a465e72d77cf40fc" Feb 19 13:21:01 crc kubenswrapper[4833]: I0219 13:21:01.305464 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t9ld9" Feb 19 13:21:01 crc kubenswrapper[4833]: I0219 13:21:01.346250 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t9ld9"] Feb 19 13:21:01 crc kubenswrapper[4833]: I0219 13:21:01.348024 4833 scope.go:117] "RemoveContainer" containerID="99e4c45c83f639d714a5c11f27b7518e9dc9626c4a3ff05e531c993f3e3e5bba" Feb 19 13:21:01 crc kubenswrapper[4833]: I0219 13:21:01.361867 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-t9ld9"] Feb 19 13:21:01 crc kubenswrapper[4833]: I0219 13:21:01.377516 4833 scope.go:117] "RemoveContainer" containerID="e56a5c33f25bf0286227f65792ac536a7e4033531c01a8c4f7614ca61d9287ff" Feb 19 13:21:01 crc kubenswrapper[4833]: I0219 13:21:01.396347 4833 scope.go:117] "RemoveContainer" containerID="62b463ccb9ccda2e4dbdd83b23eaadc41d2f5704b3e73565a465e72d77cf40fc" Feb 19 13:21:01 crc kubenswrapper[4833]: E0219 13:21:01.396834 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62b463ccb9ccda2e4dbdd83b23eaadc41d2f5704b3e73565a465e72d77cf40fc\": container with ID starting with 62b463ccb9ccda2e4dbdd83b23eaadc41d2f5704b3e73565a465e72d77cf40fc not found: ID does not exist" containerID="62b463ccb9ccda2e4dbdd83b23eaadc41d2f5704b3e73565a465e72d77cf40fc" Feb 19 13:21:01 crc kubenswrapper[4833]: I0219 13:21:01.396865 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62b463ccb9ccda2e4dbdd83b23eaadc41d2f5704b3e73565a465e72d77cf40fc"} err="failed to get container status \"62b463ccb9ccda2e4dbdd83b23eaadc41d2f5704b3e73565a465e72d77cf40fc\": rpc error: code = NotFound desc = could not find container \"62b463ccb9ccda2e4dbdd83b23eaadc41d2f5704b3e73565a465e72d77cf40fc\": container with ID starting with 62b463ccb9ccda2e4dbdd83b23eaadc41d2f5704b3e73565a465e72d77cf40fc not found: ID does not exist" Feb 19 13:21:01 crc kubenswrapper[4833]: I0219 13:21:01.396886 4833 scope.go:117] "RemoveContainer" containerID="99e4c45c83f639d714a5c11f27b7518e9dc9626c4a3ff05e531c993f3e3e5bba" Feb 19 13:21:01 crc kubenswrapper[4833]: E0219 13:21:01.397483 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99e4c45c83f639d714a5c11f27b7518e9dc9626c4a3ff05e531c993f3e3e5bba\": container with ID starting with 99e4c45c83f639d714a5c11f27b7518e9dc9626c4a3ff05e531c993f3e3e5bba not found: ID does not exist" containerID="99e4c45c83f639d714a5c11f27b7518e9dc9626c4a3ff05e531c993f3e3e5bba" Feb 19 13:21:01 crc kubenswrapper[4833]: I0219 13:21:01.397556 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99e4c45c83f639d714a5c11f27b7518e9dc9626c4a3ff05e531c993f3e3e5bba"} err="failed to get container status \"99e4c45c83f639d714a5c11f27b7518e9dc9626c4a3ff05e531c993f3e3e5bba\": rpc error: code = NotFound desc = could not find container \"99e4c45c83f639d714a5c11f27b7518e9dc9626c4a3ff05e531c993f3e3e5bba\": container with ID starting with 99e4c45c83f639d714a5c11f27b7518e9dc9626c4a3ff05e531c993f3e3e5bba not found: ID does not exist" Feb 19 13:21:01 crc kubenswrapper[4833]: I0219 13:21:01.397570 4833 scope.go:117] "RemoveContainer" containerID="e56a5c33f25bf0286227f65792ac536a7e4033531c01a8c4f7614ca61d9287ff" Feb 19 13:21:01 crc kubenswrapper[4833]: E0219 13:21:01.397776 4833 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e56a5c33f25bf0286227f65792ac536a7e4033531c01a8c4f7614ca61d9287ff\": container with ID starting with e56a5c33f25bf0286227f65792ac536a7e4033531c01a8c4f7614ca61d9287ff not found: ID does not exist" containerID="e56a5c33f25bf0286227f65792ac536a7e4033531c01a8c4f7614ca61d9287ff" Feb 19 13:21:01 crc kubenswrapper[4833]: I0219 13:21:01.397806 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e56a5c33f25bf0286227f65792ac536a7e4033531c01a8c4f7614ca61d9287ff"} err="failed to get container status \"e56a5c33f25bf0286227f65792ac536a7e4033531c01a8c4f7614ca61d9287ff\": rpc error: code = NotFound desc = could not find container \"e56a5c33f25bf0286227f65792ac536a7e4033531c01a8c4f7614ca61d9287ff\": container with ID starting with e56a5c33f25bf0286227f65792ac536a7e4033531c01a8c4f7614ca61d9287ff not found: ID does not exist" Feb 19 13:21:02 crc kubenswrapper[4833]: I0219 13:21:02.328453 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="289fe56b-8980-40a4-80e3-d53066ee2ac3" path="/var/lib/kubelet/pods/289fe56b-8980-40a4-80e3-d53066ee2ac3/volumes" Feb 19 13:21:02 crc kubenswrapper[4833]: I0219 13:21:02.329788 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0b60d1b-dd62-4e93-ae49-9dee3bcca248" path="/var/lib/kubelet/pods/c0b60d1b-dd62-4e93-ae49-9dee3bcca248/volumes" Feb 19 13:21:15 crc kubenswrapper[4833]: I0219 13:21:15.744297 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:21:15 crc kubenswrapper[4833]: I0219 13:21:15.745123 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:21:45 crc kubenswrapper[4833]: I0219 13:21:45.745844 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:21:45 crc kubenswrapper[4833]: I0219 13:21:45.746631 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:22:15 crc kubenswrapper[4833]: I0219 13:22:15.744418 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:22:15 crc kubenswrapper[4833]: I0219 13:22:15.745085 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:22:15 crc kubenswrapper[4833]: I0219 13:22:15.745134 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" Feb 19 13:22:15 crc kubenswrapper[4833]: I0219 13:22:15.745768 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9fc9b0a249b27484ad54a5d5192986aa294d70a00b6a6cd42401166586473b95"} pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 13:22:15 crc kubenswrapper[4833]: I0219 13:22:15.745828 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" containerID="cri-o://9fc9b0a249b27484ad54a5d5192986aa294d70a00b6a6cd42401166586473b95" gracePeriod=600 Feb 19 13:22:16 crc kubenswrapper[4833]: I0219 13:22:16.077043 4833 generic.go:334] "Generic (PLEG): container finished" podID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerID="9fc9b0a249b27484ad54a5d5192986aa294d70a00b6a6cd42401166586473b95" exitCode=0 Feb 19 13:22:16 crc kubenswrapper[4833]: I0219 13:22:16.077159 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" event={"ID":"a396d626-cea2-42cf-84c5-943b0b85a92b","Type":"ContainerDied","Data":"9fc9b0a249b27484ad54a5d5192986aa294d70a00b6a6cd42401166586473b95"} Feb 19 13:22:16 crc kubenswrapper[4833]: I0219 13:22:16.077433 4833 scope.go:117] "RemoveContainer" containerID="68af062ad026f894823c5275509a3a85a3d7b9b44d6ca2d938db284880905483" Feb 19 13:22:17 crc kubenswrapper[4833]: I0219 13:22:17.088752 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" event={"ID":"a396d626-cea2-42cf-84c5-943b0b85a92b","Type":"ContainerStarted","Data":"ed5ecec62c562c165cc00b7a7ac89f1c7d1f01704fb1a133db6f95b6faf23799"} Feb 19 13:23:38 crc kubenswrapper[4833]: I0219 13:23:38.378378 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mpfbp"] Feb 19 13:23:38 crc kubenswrapper[4833]: E0219 13:23:38.379519 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0b60d1b-dd62-4e93-ae49-9dee3bcca248" containerName="registry-server" Feb 19 13:23:38 crc kubenswrapper[4833]: I0219 13:23:38.379536 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0b60d1b-dd62-4e93-ae49-9dee3bcca248" containerName="registry-server" Feb 19 13:23:38 crc kubenswrapper[4833]: E0219 13:23:38.379560 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="289fe56b-8980-40a4-80e3-d53066ee2ac3" containerName="registry-server" Feb 19 13:23:38 crc kubenswrapper[4833]: I0219 13:23:38.379568 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="289fe56b-8980-40a4-80e3-d53066ee2ac3" containerName="registry-server" Feb 19 13:23:38 crc kubenswrapper[4833]: E0219 13:23:38.379585 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0b60d1b-dd62-4e93-ae49-9dee3bcca248" containerName="extract-utilities" Feb 19 13:23:38 crc kubenswrapper[4833]: I0219 13:23:38.379594 4833 
Feb 19 13:23:38 crc kubenswrapper[4833]: I0219 13:23:38.378378 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mpfbp"]
Feb 19 13:23:38 crc kubenswrapper[4833]: E0219 13:23:38.379519 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0b60d1b-dd62-4e93-ae49-9dee3bcca248" containerName="registry-server"
Feb 19 13:23:38 crc kubenswrapper[4833]: I0219 13:23:38.379536 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0b60d1b-dd62-4e93-ae49-9dee3bcca248" containerName="registry-server"
Feb 19 13:23:38 crc kubenswrapper[4833]: E0219 13:23:38.379560 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="289fe56b-8980-40a4-80e3-d53066ee2ac3" containerName="registry-server"
Feb 19 13:23:38 crc kubenswrapper[4833]: I0219 13:23:38.379568 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="289fe56b-8980-40a4-80e3-d53066ee2ac3" containerName="registry-server"
Feb 19 13:23:38 crc kubenswrapper[4833]: E0219 13:23:38.379585 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0b60d1b-dd62-4e93-ae49-9dee3bcca248" containerName="extract-utilities"
Feb 19 13:23:38 crc kubenswrapper[4833]: I0219 13:23:38.379594 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0b60d1b-dd62-4e93-ae49-9dee3bcca248" containerName="extract-utilities"
Feb 19 13:23:38 crc kubenswrapper[4833]: E0219 13:23:38.379613 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="289fe56b-8980-40a4-80e3-d53066ee2ac3" containerName="extract-utilities"
Feb 19 13:23:38 crc kubenswrapper[4833]: I0219 13:23:38.379621 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="289fe56b-8980-40a4-80e3-d53066ee2ac3" containerName="extract-utilities"
Feb 19 13:23:38 crc kubenswrapper[4833]: E0219 13:23:38.379642 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0b60d1b-dd62-4e93-ae49-9dee3bcca248" containerName="extract-content"
Feb 19 13:23:38 crc kubenswrapper[4833]: I0219 13:23:38.379650 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0b60d1b-dd62-4e93-ae49-9dee3bcca248" containerName="extract-content"
Feb 19 13:23:38 crc kubenswrapper[4833]: E0219 13:23:38.379665 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="289fe56b-8980-40a4-80e3-d53066ee2ac3" containerName="extract-content"
Feb 19 13:23:38 crc kubenswrapper[4833]: I0219 13:23:38.379673 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="289fe56b-8980-40a4-80e3-d53066ee2ac3" containerName="extract-content"
Feb 19 13:23:38 crc kubenswrapper[4833]: I0219 13:23:38.379888 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="289fe56b-8980-40a4-80e3-d53066ee2ac3" containerName="registry-server"
Feb 19 13:23:38 crc kubenswrapper[4833]: I0219 13:23:38.379920 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0b60d1b-dd62-4e93-ae49-9dee3bcca248" containerName="registry-server"
Feb 19 13:23:38 crc kubenswrapper[4833]: I0219 13:23:38.386814 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mpfbp"
Feb 19 13:23:38 crc kubenswrapper[4833]: I0219 13:23:38.405809 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mpfbp"]
Feb 19 13:23:38 crc kubenswrapper[4833]: I0219 13:23:38.501180 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbedb47d-5b9d-4f7e-b9f9-c79538c863a7-catalog-content\") pod \"certified-operators-mpfbp\" (UID: \"bbedb47d-5b9d-4f7e-b9f9-c79538c863a7\") " pod="openshift-marketplace/certified-operators-mpfbp"
Feb 19 13:23:38 crc kubenswrapper[4833]: I0219 13:23:38.501238 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbedb47d-5b9d-4f7e-b9f9-c79538c863a7-utilities\") pod \"certified-operators-mpfbp\" (UID: \"bbedb47d-5b9d-4f7e-b9f9-c79538c863a7\") " pod="openshift-marketplace/certified-operators-mpfbp"
Feb 19 13:23:38 crc kubenswrapper[4833]: I0219 13:23:38.501516 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r42c6\" (UniqueName: \"kubernetes.io/projected/bbedb47d-5b9d-4f7e-b9f9-c79538c863a7-kube-api-access-r42c6\") pod \"certified-operators-mpfbp\" (UID: \"bbedb47d-5b9d-4f7e-b9f9-c79538c863a7\") " pod="openshift-marketplace/certified-operators-mpfbp"
Feb 19 13:23:38 crc kubenswrapper[4833]: I0219 13:23:38.603183 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r42c6\" (UniqueName: \"kubernetes.io/projected/bbedb47d-5b9d-4f7e-b9f9-c79538c863a7-kube-api-access-r42c6\") pod \"certified-operators-mpfbp\" (UID: \"bbedb47d-5b9d-4f7e-b9f9-c79538c863a7\") " pod="openshift-marketplace/certified-operators-mpfbp"
Feb 19 13:23:38 crc kubenswrapper[4833]: I0219 13:23:38.603301 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbedb47d-5b9d-4f7e-b9f9-c79538c863a7-catalog-content\") pod \"certified-operators-mpfbp\" (UID: \"bbedb47d-5b9d-4f7e-b9f9-c79538c863a7\") " pod="openshift-marketplace/certified-operators-mpfbp"
Feb 19 13:23:38 crc kubenswrapper[4833]: I0219 13:23:38.603341 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbedb47d-5b9d-4f7e-b9f9-c79538c863a7-utilities\") pod \"certified-operators-mpfbp\" (UID: \"bbedb47d-5b9d-4f7e-b9f9-c79538c863a7\") " pod="openshift-marketplace/certified-operators-mpfbp"
Feb 19 13:23:38 crc kubenswrapper[4833]: I0219 13:23:38.603833 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbedb47d-5b9d-4f7e-b9f9-c79538c863a7-catalog-content\") pod \"certified-operators-mpfbp\" (UID: \"bbedb47d-5b9d-4f7e-b9f9-c79538c863a7\") " pod="openshift-marketplace/certified-operators-mpfbp"
Feb 19 13:23:38 crc kubenswrapper[4833]: I0219 13:23:38.603883 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbedb47d-5b9d-4f7e-b9f9-c79538c863a7-utilities\") pod \"certified-operators-mpfbp\" (UID: \"bbedb47d-5b9d-4f7e-b9f9-c79538c863a7\") " pod="openshift-marketplace/certified-operators-mpfbp"
Feb 19 13:23:38 crc kubenswrapper[4833]: I0219 13:23:38.622205 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r42c6\" (UniqueName: \"kubernetes.io/projected/bbedb47d-5b9d-4f7e-b9f9-c79538c863a7-kube-api-access-r42c6\") pod \"certified-operators-mpfbp\" (UID: \"bbedb47d-5b9d-4f7e-b9f9-c79538c863a7\") " pod="openshift-marketplace/certified-operators-mpfbp"
Feb 19 13:23:38 crc kubenswrapper[4833]: I0219 13:23:38.709891 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mpfbp"
Feb 19 13:23:39 crc kubenswrapper[4833]: I0219 13:23:39.241771 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mpfbp"]
Feb 19 13:23:39 crc kubenswrapper[4833]: I0219 13:23:39.955371 4833 generic.go:334] "Generic (PLEG): container finished" podID="bbedb47d-5b9d-4f7e-b9f9-c79538c863a7" containerID="277532aa9c2bdd04763212d05f420d3462a126baac206fdff86ef7fc8b010ab2" exitCode=0
Feb 19 13:23:39 crc kubenswrapper[4833]: I0219 13:23:39.955526 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpfbp" event={"ID":"bbedb47d-5b9d-4f7e-b9f9-c79538c863a7","Type":"ContainerDied","Data":"277532aa9c2bdd04763212d05f420d3462a126baac206fdff86ef7fc8b010ab2"}
Feb 19 13:23:39 crc kubenswrapper[4833]: I0219 13:23:39.955709 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpfbp" event={"ID":"bbedb47d-5b9d-4f7e-b9f9-c79538c863a7","Type":"ContainerStarted","Data":"03c6b5a20de73e4204039f59d7ddd3ca4aab4fc8f6f7c8c0cb25a33bdafb1862"}
Feb 19 13:23:41 crc kubenswrapper[4833]: I0219 13:23:41.976373 4833 generic.go:334] "Generic (PLEG): container finished" podID="bbedb47d-5b9d-4f7e-b9f9-c79538c863a7" containerID="3b12a53ebe875e17669eaf5d43d29282d2b9e593702a95f8c73d642086bea4df" exitCode=0
Feb 19 13:23:41 crc kubenswrapper[4833]: I0219 13:23:41.976425 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpfbp" event={"ID":"bbedb47d-5b9d-4f7e-b9f9-c79538c863a7","Type":"ContainerDied","Data":"3b12a53ebe875e17669eaf5d43d29282d2b9e593702a95f8c73d642086bea4df"}
Feb 19 13:23:42 crc kubenswrapper[4833]: I0219 13:23:42.989695 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpfbp" event={"ID":"bbedb47d-5b9d-4f7e-b9f9-c79538c863a7","Type":"ContainerStarted","Data":"1adebb84bfe63acf98a7aaa972488338d62e16808671dc29b28644e1875f7061"}
Feb 19 13:23:43 crc kubenswrapper[4833]: I0219 13:23:43.026046 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mpfbp" podStartSLOduration=2.594420105 podStartE2EDuration="5.025939063s" podCreationTimestamp="2026-02-19 13:23:38 +0000 UTC" firstStartedPulling="2026-02-19 13:23:39.9583666 +0000 UTC m=+2230.353885368" lastFinishedPulling="2026-02-19 13:23:42.389885558 +0000 UTC m=+2232.785404326" observedRunningTime="2026-02-19 13:23:43.009234513 +0000 UTC m=+2233.404753291" watchObservedRunningTime="2026-02-19 13:23:43.025939063 +0000 UTC m=+2233.421457841"
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mpfbp" Feb 19 13:23:49 crc kubenswrapper[4833]: I0219 13:23:49.112052 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mpfbp" Feb 19 13:23:49 crc kubenswrapper[4833]: I0219 13:23:49.168632 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mpfbp"] Feb 19 13:23:51 crc kubenswrapper[4833]: I0219 13:23:51.076412 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mpfbp" podUID="bbedb47d-5b9d-4f7e-b9f9-c79538c863a7" containerName="registry-server" containerID="cri-o://1adebb84bfe63acf98a7aaa972488338d62e16808671dc29b28644e1875f7061" gracePeriod=2 Feb 19 13:23:51 crc kubenswrapper[4833]: I0219 13:23:51.564575 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mpfbp" Feb 19 13:23:51 crc kubenswrapper[4833]: I0219 13:23:51.732319 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r42c6\" (UniqueName: \"kubernetes.io/projected/bbedb47d-5b9d-4f7e-b9f9-c79538c863a7-kube-api-access-r42c6\") pod \"bbedb47d-5b9d-4f7e-b9f9-c79538c863a7\" (UID: \"bbedb47d-5b9d-4f7e-b9f9-c79538c863a7\") " Feb 19 13:23:51 crc kubenswrapper[4833]: I0219 13:23:51.732896 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbedb47d-5b9d-4f7e-b9f9-c79538c863a7-utilities\") pod \"bbedb47d-5b9d-4f7e-b9f9-c79538c863a7\" (UID: \"bbedb47d-5b9d-4f7e-b9f9-c79538c863a7\") " Feb 19 13:23:51 crc kubenswrapper[4833]: I0219 13:23:51.732997 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbedb47d-5b9d-4f7e-b9f9-c79538c863a7-catalog-content\") pod \"bbedb47d-5b9d-4f7e-b9f9-c79538c863a7\" (UID: \"bbedb47d-5b9d-4f7e-b9f9-c79538c863a7\") " Feb 19 13:23:51 crc kubenswrapper[4833]: I0219 13:23:51.734308 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbedb47d-5b9d-4f7e-b9f9-c79538c863a7-utilities" (OuterVolumeSpecName: "utilities") pod "bbedb47d-5b9d-4f7e-b9f9-c79538c863a7" (UID: "bbedb47d-5b9d-4f7e-b9f9-c79538c863a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:23:51 crc kubenswrapper[4833]: I0219 13:23:51.743015 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbedb47d-5b9d-4f7e-b9f9-c79538c863a7-kube-api-access-r42c6" (OuterVolumeSpecName: "kube-api-access-r42c6") pod "bbedb47d-5b9d-4f7e-b9f9-c79538c863a7" (UID: "bbedb47d-5b9d-4f7e-b9f9-c79538c863a7"). InnerVolumeSpecName "kube-api-access-r42c6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:23:51 crc kubenswrapper[4833]: I0219 13:23:51.807031 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbedb47d-5b9d-4f7e-b9f9-c79538c863a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bbedb47d-5b9d-4f7e-b9f9-c79538c863a7" (UID: "bbedb47d-5b9d-4f7e-b9f9-c79538c863a7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:23:51 crc kubenswrapper[4833]: I0219 13:23:51.836162 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r42c6\" (UniqueName: \"kubernetes.io/projected/bbedb47d-5b9d-4f7e-b9f9-c79538c863a7-kube-api-access-r42c6\") on node \"crc\" DevicePath \"\"" Feb 19 13:23:51 crc kubenswrapper[4833]: I0219 13:23:51.836215 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbedb47d-5b9d-4f7e-b9f9-c79538c863a7-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:23:51 crc kubenswrapper[4833]: I0219 13:23:51.836236 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbedb47d-5b9d-4f7e-b9f9-c79538c863a7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:23:52 crc kubenswrapper[4833]: I0219 13:23:52.088569 4833 generic.go:334] "Generic (PLEG): container finished" podID="bbedb47d-5b9d-4f7e-b9f9-c79538c863a7" containerID="1adebb84bfe63acf98a7aaa972488338d62e16808671dc29b28644e1875f7061" exitCode=0 Feb 19 13:23:52 crc kubenswrapper[4833]: I0219 13:23:52.088627 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mpfbp" Feb 19 13:23:52 crc kubenswrapper[4833]: I0219 13:23:52.088633 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpfbp" event={"ID":"bbedb47d-5b9d-4f7e-b9f9-c79538c863a7","Type":"ContainerDied","Data":"1adebb84bfe63acf98a7aaa972488338d62e16808671dc29b28644e1875f7061"} Feb 19 13:23:52 crc kubenswrapper[4833]: I0219 13:23:52.088689 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpfbp" event={"ID":"bbedb47d-5b9d-4f7e-b9f9-c79538c863a7","Type":"ContainerDied","Data":"03c6b5a20de73e4204039f59d7ddd3ca4aab4fc8f6f7c8c0cb25a33bdafb1862"} Feb 19 13:23:52 crc kubenswrapper[4833]: I0219 13:23:52.088715 4833 scope.go:117] "RemoveContainer" containerID="1adebb84bfe63acf98a7aaa972488338d62e16808671dc29b28644e1875f7061" Feb 19 13:23:52 crc kubenswrapper[4833]: I0219 13:23:52.130725 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mpfbp"] Feb 19 13:23:52 crc kubenswrapper[4833]: I0219 13:23:52.133848 4833 scope.go:117] "RemoveContainer" containerID="3b12a53ebe875e17669eaf5d43d29282d2b9e593702a95f8c73d642086bea4df" Feb 19 13:23:52 crc kubenswrapper[4833]: I0219 13:23:52.140441 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mpfbp"] Feb 19 13:23:52 crc kubenswrapper[4833]: I0219 13:23:52.167247 4833 scope.go:117] "RemoveContainer" containerID="277532aa9c2bdd04763212d05f420d3462a126baac206fdff86ef7fc8b010ab2" Feb 19 13:23:52 crc kubenswrapper[4833]: I0219 13:23:52.228880 4833 scope.go:117] "RemoveContainer" containerID="1adebb84bfe63acf98a7aaa972488338d62e16808671dc29b28644e1875f7061" Feb 19 13:23:52 crc kubenswrapper[4833]: E0219 13:23:52.229914 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1adebb84bfe63acf98a7aaa972488338d62e16808671dc29b28644e1875f7061\": container with ID starting with 1adebb84bfe63acf98a7aaa972488338d62e16808671dc29b28644e1875f7061 not found: ID does not exist" containerID="1adebb84bfe63acf98a7aaa972488338d62e16808671dc29b28644e1875f7061" Feb 19 13:23:52 crc kubenswrapper[4833]: I0219 13:23:52.229967 
4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1adebb84bfe63acf98a7aaa972488338d62e16808671dc29b28644e1875f7061"} err="failed to get container status \"1adebb84bfe63acf98a7aaa972488338d62e16808671dc29b28644e1875f7061\": rpc error: code = NotFound desc = could not find container \"1adebb84bfe63acf98a7aaa972488338d62e16808671dc29b28644e1875f7061\": container with ID starting with 1adebb84bfe63acf98a7aaa972488338d62e16808671dc29b28644e1875f7061 not found: ID does not exist" Feb 19 13:23:52 crc kubenswrapper[4833]: I0219 13:23:52.230002 4833 scope.go:117] "RemoveContainer" containerID="3b12a53ebe875e17669eaf5d43d29282d2b9e593702a95f8c73d642086bea4df" Feb 19 13:23:52 crc kubenswrapper[4833]: E0219 13:23:52.230450 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b12a53ebe875e17669eaf5d43d29282d2b9e593702a95f8c73d642086bea4df\": container with ID starting with 3b12a53ebe875e17669eaf5d43d29282d2b9e593702a95f8c73d642086bea4df not found: ID does not exist" containerID="3b12a53ebe875e17669eaf5d43d29282d2b9e593702a95f8c73d642086bea4df" Feb 19 13:23:52 crc kubenswrapper[4833]: I0219 13:23:52.230488 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b12a53ebe875e17669eaf5d43d29282d2b9e593702a95f8c73d642086bea4df"} err="failed to get container status \"3b12a53ebe875e17669eaf5d43d29282d2b9e593702a95f8c73d642086bea4df\": rpc error: code = NotFound desc = could not find container \"3b12a53ebe875e17669eaf5d43d29282d2b9e593702a95f8c73d642086bea4df\": container with ID starting with 3b12a53ebe875e17669eaf5d43d29282d2b9e593702a95f8c73d642086bea4df not found: ID does not exist" Feb 19 13:23:52 crc kubenswrapper[4833]: I0219 13:23:52.230538 4833 scope.go:117] "RemoveContainer" containerID="277532aa9c2bdd04763212d05f420d3462a126baac206fdff86ef7fc8b010ab2" Feb 19 13:23:52 crc kubenswrapper[4833]: E0219 13:23:52.230878 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"277532aa9c2bdd04763212d05f420d3462a126baac206fdff86ef7fc8b010ab2\": container with ID starting with 277532aa9c2bdd04763212d05f420d3462a126baac206fdff86ef7fc8b010ab2 not found: ID does not exist" containerID="277532aa9c2bdd04763212d05f420d3462a126baac206fdff86ef7fc8b010ab2" Feb 19 13:23:52 crc kubenswrapper[4833]: I0219 13:23:52.230918 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"277532aa9c2bdd04763212d05f420d3462a126baac206fdff86ef7fc8b010ab2"} err="failed to get container status \"277532aa9c2bdd04763212d05f420d3462a126baac206fdff86ef7fc8b010ab2\": rpc error: code = NotFound desc = could not find container \"277532aa9c2bdd04763212d05f420d3462a126baac206fdff86ef7fc8b010ab2\": container with ID starting with 277532aa9c2bdd04763212d05f420d3462a126baac206fdff86ef7fc8b010ab2 not found: ID does not exist" Feb 19 13:23:52 crc kubenswrapper[4833]: I0219 13:23:52.326366 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbedb47d-5b9d-4f7e-b9f9-c79538c863a7" path="/var/lib/kubelet/pods/bbedb47d-5b9d-4f7e-b9f9-c79538c863a7/volumes" Feb 19 13:24:31 crc kubenswrapper[4833]: I0219 13:24:31.519430 4833 generic.go:334] "Generic (PLEG): container finished" podID="c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0" containerID="3cd75de870cdf5f700a660229f0c0b2cf551494e190674b6baab175c48e9ba6c" exitCode=0 Feb 19 13:24:31 crc kubenswrapper[4833]: 
I0219 13:24:31.519554 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv" event={"ID":"c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0","Type":"ContainerDied","Data":"3cd75de870cdf5f700a660229f0c0b2cf551494e190674b6baab175c48e9ba6c"} Feb 19 13:24:32 crc kubenswrapper[4833]: I0219 13:24:32.933090 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.072008 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0-libvirt-secret-0\") pod \"c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0\" (UID: \"c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0\") " Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.072098 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0-libvirt-combined-ca-bundle\") pod \"c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0\" (UID: \"c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0\") " Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.072165 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0-ssh-key-openstack-edpm-ipam\") pod \"c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0\" (UID: \"c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0\") " Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.072226 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x9xn\" (UniqueName: \"kubernetes.io/projected/c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0-kube-api-access-8x9xn\") pod \"c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0\" (UID: \"c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0\") " Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.072247 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0-inventory\") pod \"c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0\" (UID: \"c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0\") " Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.085470 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0" (UID: "c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.098744 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0-kube-api-access-8x9xn" (OuterVolumeSpecName: "kube-api-access-8x9xn") pod "c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0" (UID: "c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0"). InnerVolumeSpecName "kube-api-access-8x9xn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.123195 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0" (UID: "c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.125361 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0" (UID: "c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.137296 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0-inventory" (OuterVolumeSpecName: "inventory") pod "c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0" (UID: "c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.178475 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.178531 4833 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.178549 4833 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.178563 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.178575 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8x9xn\" (UniqueName: \"kubernetes.io/projected/c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0-kube-api-access-8x9xn\") on node \"crc\" DevicePath \"\"" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.547262 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv" event={"ID":"c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0","Type":"ContainerDied","Data":"f3a5715b7140705f896ae4f066ecaa39c4e0512cfb7cf1f6cc78da7ec4776d2a"} Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.547296 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3a5715b7140705f896ae4f066ecaa39c4e0512cfb7cf1f6cc78da7ec4776d2a" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.547335 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.661368 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c"] Feb 19 13:24:33 crc kubenswrapper[4833]: E0219 13:24:33.661935 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbedb47d-5b9d-4f7e-b9f9-c79538c863a7" containerName="registry-server" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.661966 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbedb47d-5b9d-4f7e-b9f9-c79538c863a7" containerName="registry-server" Feb 19 13:24:33 crc kubenswrapper[4833]: E0219 13:24:33.661993 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbedb47d-5b9d-4f7e-b9f9-c79538c863a7" containerName="extract-utilities" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.662005 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbedb47d-5b9d-4f7e-b9f9-c79538c863a7" containerName="extract-utilities" Feb 19 13:24:33 crc kubenswrapper[4833]: E0219 13:24:33.662042 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.662057 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 19 13:24:33 crc kubenswrapper[4833]: E0219 13:24:33.662082 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbedb47d-5b9d-4f7e-b9f9-c79538c863a7" containerName="extract-content" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.662107 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbedb47d-5b9d-4f7e-b9f9-c79538c863a7" containerName="extract-content" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.662417 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.662474 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbedb47d-5b9d-4f7e-b9f9-c79538c863a7" containerName="registry-server" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.663326 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.665396 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.665862 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xcxf4" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.666052 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.666175 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.666279 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.667725 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.669152 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.673132 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c"] Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.791676 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rk95c\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.792149 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rk95c\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.792187 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rk95c\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.792208 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rk95c\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.792233 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rk95c\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.792278 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rk95c\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.792322 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rk95c\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.792338 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rk95c\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.792363 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rk95c\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.792386 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl5pt\" (UniqueName: \"kubernetes.io/projected/cf0f1512-542b-4358-b74b-57df19d9c7d3-kube-api-access-zl5pt\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rk95c\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.792408 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rk95c\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.894863 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rk95c\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.894993 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rk95c\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.895082 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rk95c\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.895200 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rk95c\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.895326 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rk95c\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.895371 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rk95c\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.895430 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rk95c\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.895474 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl5pt\" (UniqueName: \"kubernetes.io/projected/cf0f1512-542b-4358-b74b-57df19d9c7d3-kube-api-access-zl5pt\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rk95c\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.895579 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rk95c\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.895668 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rk95c\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.895782 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rk95c\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.897159 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rk95c\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.900004 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rk95c\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.900934 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rk95c\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.901350 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rk95c\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.901578 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rk95c\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.901711 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rk95c\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.902445 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" 
(UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rk95c\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.902981 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rk95c\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.909564 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rk95c\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.910192 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rk95c\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.918588 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl5pt\" (UniqueName: \"kubernetes.io/projected/cf0f1512-542b-4358-b74b-57df19d9c7d3-kube-api-access-zl5pt\") pod \"nova-edpm-deployment-openstack-edpm-ipam-rk95c\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:24:33 crc kubenswrapper[4833]: I0219 13:24:33.989096 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:24:34 crc kubenswrapper[4833]: I0219 13:24:34.576375 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c"] Feb 19 13:24:35 crc kubenswrapper[4833]: I0219 13:24:35.564163 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" event={"ID":"cf0f1512-542b-4358-b74b-57df19d9c7d3","Type":"ContainerStarted","Data":"593344c38688080d8f06f661752fcde4b98b1d646ef9ed2d93eb3f40dac777d0"} Feb 19 13:24:35 crc kubenswrapper[4833]: I0219 13:24:35.564216 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" event={"ID":"cf0f1512-542b-4358-b74b-57df19d9c7d3","Type":"ContainerStarted","Data":"19180bc01c7c80787c2da98ef0dfd0a5d7da1c80c0ec80da138d9e0b9a3e57f8"} Feb 19 13:24:45 crc kubenswrapper[4833]: I0219 13:24:45.745267 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:24:45 crc kubenswrapper[4833]: I0219 13:24:45.746210 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:25:15 crc kubenswrapper[4833]: I0219 13:25:15.744340 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:25:15 crc kubenswrapper[4833]: I0219 13:25:15.745200 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:25:29 crc kubenswrapper[4833]: I0219 13:25:29.809656 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" podStartSLOduration=56.315885037 podStartE2EDuration="56.809616113s" podCreationTimestamp="2026-02-19 13:24:33 +0000 UTC" firstStartedPulling="2026-02-19 13:24:34.582732215 +0000 UTC m=+2284.978250983" lastFinishedPulling="2026-02-19 13:24:35.076463251 +0000 UTC m=+2285.471982059" observedRunningTime="2026-02-19 13:24:35.596678484 +0000 UTC m=+2285.992197252" watchObservedRunningTime="2026-02-19 13:25:29.809616113 +0000 UTC m=+2340.205134941" Feb 19 13:25:29 crc kubenswrapper[4833]: I0219 13:25:29.834406 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hlc6g"] Feb 19 13:25:29 crc kubenswrapper[4833]: I0219 13:25:29.838951 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hlc6g" Feb 19 13:25:29 crc kubenswrapper[4833]: I0219 13:25:29.860024 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f31b85a4-7f45-470d-aa45-3ab9f53b0114-catalog-content\") pod \"redhat-marketplace-hlc6g\" (UID: \"f31b85a4-7f45-470d-aa45-3ab9f53b0114\") " pod="openshift-marketplace/redhat-marketplace-hlc6g" Feb 19 13:25:29 crc kubenswrapper[4833]: I0219 13:25:29.860400 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f31b85a4-7f45-470d-aa45-3ab9f53b0114-utilities\") pod \"redhat-marketplace-hlc6g\" (UID: \"f31b85a4-7f45-470d-aa45-3ab9f53b0114\") " pod="openshift-marketplace/redhat-marketplace-hlc6g" Feb 19 13:25:29 crc kubenswrapper[4833]: I0219 13:25:29.860574 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvzj4\" (UniqueName: \"kubernetes.io/projected/f31b85a4-7f45-470d-aa45-3ab9f53b0114-kube-api-access-nvzj4\") pod \"redhat-marketplace-hlc6g\" (UID: \"f31b85a4-7f45-470d-aa45-3ab9f53b0114\") " pod="openshift-marketplace/redhat-marketplace-hlc6g" Feb 19 13:25:29 crc kubenswrapper[4833]: I0219 13:25:29.869080 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hlc6g"] Feb 19 13:25:29 crc kubenswrapper[4833]: I0219 13:25:29.962149 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f31b85a4-7f45-470d-aa45-3ab9f53b0114-catalog-content\") pod \"redhat-marketplace-hlc6g\" (UID: \"f31b85a4-7f45-470d-aa45-3ab9f53b0114\") " pod="openshift-marketplace/redhat-marketplace-hlc6g" Feb 19 13:25:29 crc kubenswrapper[4833]: I0219 13:25:29.962213 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f31b85a4-7f45-470d-aa45-3ab9f53b0114-utilities\") pod \"redhat-marketplace-hlc6g\" (UID: \"f31b85a4-7f45-470d-aa45-3ab9f53b0114\") " pod="openshift-marketplace/redhat-marketplace-hlc6g" Feb 19 13:25:29 crc kubenswrapper[4833]: I0219 13:25:29.962246 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvzj4\" (UniqueName: \"kubernetes.io/projected/f31b85a4-7f45-470d-aa45-3ab9f53b0114-kube-api-access-nvzj4\") pod \"redhat-marketplace-hlc6g\" (UID: \"f31b85a4-7f45-470d-aa45-3ab9f53b0114\") " pod="openshift-marketplace/redhat-marketplace-hlc6g" Feb 19 13:25:29 crc kubenswrapper[4833]: I0219 13:25:29.962779 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f31b85a4-7f45-470d-aa45-3ab9f53b0114-catalog-content\") pod \"redhat-marketplace-hlc6g\" (UID: \"f31b85a4-7f45-470d-aa45-3ab9f53b0114\") " pod="openshift-marketplace/redhat-marketplace-hlc6g" Feb 19 13:25:29 crc kubenswrapper[4833]: I0219 13:25:29.962836 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f31b85a4-7f45-470d-aa45-3ab9f53b0114-utilities\") pod \"redhat-marketplace-hlc6g\" (UID: \"f31b85a4-7f45-470d-aa45-3ab9f53b0114\") " pod="openshift-marketplace/redhat-marketplace-hlc6g" Feb 19 13:25:29 crc kubenswrapper[4833]: I0219 13:25:29.987148 4833 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-nvzj4\" (UniqueName: \"kubernetes.io/projected/f31b85a4-7f45-470d-aa45-3ab9f53b0114-kube-api-access-nvzj4\") pod \"redhat-marketplace-hlc6g\" (UID: \"f31b85a4-7f45-470d-aa45-3ab9f53b0114\") " pod="openshift-marketplace/redhat-marketplace-hlc6g" Feb 19 13:25:30 crc kubenswrapper[4833]: I0219 13:25:30.176383 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hlc6g" Feb 19 13:25:30 crc kubenswrapper[4833]: I0219 13:25:30.685099 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hlc6g"] Feb 19 13:25:31 crc kubenswrapper[4833]: I0219 13:25:31.137355 4833 generic.go:334] "Generic (PLEG): container finished" podID="f31b85a4-7f45-470d-aa45-3ab9f53b0114" containerID="f4016a3c9659620b8d991ec6589c1ac40f0a694e8b53a3bf5c880e70cb6e9560" exitCode=0 Feb 19 13:25:31 crc kubenswrapper[4833]: I0219 13:25:31.137400 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hlc6g" event={"ID":"f31b85a4-7f45-470d-aa45-3ab9f53b0114","Type":"ContainerDied","Data":"f4016a3c9659620b8d991ec6589c1ac40f0a694e8b53a3bf5c880e70cb6e9560"} Feb 19 13:25:31 crc kubenswrapper[4833]: I0219 13:25:31.139931 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hlc6g" event={"ID":"f31b85a4-7f45-470d-aa45-3ab9f53b0114","Type":"ContainerStarted","Data":"4aa82350d8936f3cf0d6105752a48a2fe2335063370ff36bcc30674144041968"} Feb 19 13:25:32 crc kubenswrapper[4833]: I0219 13:25:32.149783 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hlc6g" event={"ID":"f31b85a4-7f45-470d-aa45-3ab9f53b0114","Type":"ContainerStarted","Data":"ce48725cc9923ef4cdd5f0a0a2f73a61cf1283106a3d1a236bed77e3df318628"} Feb 19 13:25:33 crc kubenswrapper[4833]: I0219 13:25:33.168822 4833 generic.go:334] "Generic (PLEG): container finished" podID="f31b85a4-7f45-470d-aa45-3ab9f53b0114" containerID="ce48725cc9923ef4cdd5f0a0a2f73a61cf1283106a3d1a236bed77e3df318628" exitCode=0 Feb 19 13:25:33 crc kubenswrapper[4833]: I0219 13:25:33.169643 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hlc6g" event={"ID":"f31b85a4-7f45-470d-aa45-3ab9f53b0114","Type":"ContainerDied","Data":"ce48725cc9923ef4cdd5f0a0a2f73a61cf1283106a3d1a236bed77e3df318628"} Feb 19 13:25:34 crc kubenswrapper[4833]: I0219 13:25:34.181012 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hlc6g" event={"ID":"f31b85a4-7f45-470d-aa45-3ab9f53b0114","Type":"ContainerStarted","Data":"314aa3269633b7d7c01fef172e6a52d98695ab9fc11d9f91a92ada608070c141"} Feb 19 13:25:34 crc kubenswrapper[4833]: I0219 13:25:34.206274 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hlc6g" podStartSLOduration=2.734025465 podStartE2EDuration="5.206251876s" podCreationTimestamp="2026-02-19 13:25:29 +0000 UTC" firstStartedPulling="2026-02-19 13:25:31.140831109 +0000 UTC m=+2341.536349877" lastFinishedPulling="2026-02-19 13:25:33.61305751 +0000 UTC m=+2344.008576288" observedRunningTime="2026-02-19 13:25:34.199436036 +0000 UTC m=+2344.594954824" watchObservedRunningTime="2026-02-19 13:25:34.206251876 +0000 UTC m=+2344.601770654" Feb 19 13:25:40 crc kubenswrapper[4833]: I0219 13:25:40.176490 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-hlc6g" Feb 19 13:25:40 crc kubenswrapper[4833]: I0219 13:25:40.177161 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hlc6g" Feb 19 13:25:40 crc kubenswrapper[4833]: I0219 13:25:40.265293 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hlc6g" Feb 19 13:25:40 crc kubenswrapper[4833]: I0219 13:25:40.337812 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hlc6g" Feb 19 13:25:40 crc kubenswrapper[4833]: I0219 13:25:40.516202 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hlc6g"] Feb 19 13:25:42 crc kubenswrapper[4833]: I0219 13:25:42.272189 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hlc6g" podUID="f31b85a4-7f45-470d-aa45-3ab9f53b0114" containerName="registry-server" containerID="cri-o://314aa3269633b7d7c01fef172e6a52d98695ab9fc11d9f91a92ada608070c141" gracePeriod=2 Feb 19 13:25:42 crc kubenswrapper[4833]: I0219 13:25:42.726233 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hlc6g" Feb 19 13:25:42 crc kubenswrapper[4833]: I0219 13:25:42.881952 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f31b85a4-7f45-470d-aa45-3ab9f53b0114-utilities\") pod \"f31b85a4-7f45-470d-aa45-3ab9f53b0114\" (UID: \"f31b85a4-7f45-470d-aa45-3ab9f53b0114\") " Feb 19 13:25:42 crc kubenswrapper[4833]: I0219 13:25:42.882389 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f31b85a4-7f45-470d-aa45-3ab9f53b0114-catalog-content\") pod \"f31b85a4-7f45-470d-aa45-3ab9f53b0114\" (UID: \"f31b85a4-7f45-470d-aa45-3ab9f53b0114\") " Feb 19 13:25:42 crc kubenswrapper[4833]: I0219 13:25:42.882484 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvzj4\" (UniqueName: \"kubernetes.io/projected/f31b85a4-7f45-470d-aa45-3ab9f53b0114-kube-api-access-nvzj4\") pod \"f31b85a4-7f45-470d-aa45-3ab9f53b0114\" (UID: \"f31b85a4-7f45-470d-aa45-3ab9f53b0114\") " Feb 19 13:25:42 crc kubenswrapper[4833]: I0219 13:25:42.884177 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f31b85a4-7f45-470d-aa45-3ab9f53b0114-utilities" (OuterVolumeSpecName: "utilities") pod "f31b85a4-7f45-470d-aa45-3ab9f53b0114" (UID: "f31b85a4-7f45-470d-aa45-3ab9f53b0114"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:25:42 crc kubenswrapper[4833]: I0219 13:25:42.892146 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f31b85a4-7f45-470d-aa45-3ab9f53b0114-kube-api-access-nvzj4" (OuterVolumeSpecName: "kube-api-access-nvzj4") pod "f31b85a4-7f45-470d-aa45-3ab9f53b0114" (UID: "f31b85a4-7f45-470d-aa45-3ab9f53b0114"). InnerVolumeSpecName "kube-api-access-nvzj4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:25:42 crc kubenswrapper[4833]: I0219 13:25:42.936219 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f31b85a4-7f45-470d-aa45-3ab9f53b0114-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f31b85a4-7f45-470d-aa45-3ab9f53b0114" (UID: "f31b85a4-7f45-470d-aa45-3ab9f53b0114"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:25:42 crc kubenswrapper[4833]: I0219 13:25:42.985064 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f31b85a4-7f45-470d-aa45-3ab9f53b0114-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:25:42 crc kubenswrapper[4833]: I0219 13:25:42.985123 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f31b85a4-7f45-470d-aa45-3ab9f53b0114-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:25:42 crc kubenswrapper[4833]: I0219 13:25:42.985148 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvzj4\" (UniqueName: \"kubernetes.io/projected/f31b85a4-7f45-470d-aa45-3ab9f53b0114-kube-api-access-nvzj4\") on node \"crc\" DevicePath \"\"" Feb 19 13:25:43 crc kubenswrapper[4833]: I0219 13:25:43.300688 4833 generic.go:334] "Generic (PLEG): container finished" podID="f31b85a4-7f45-470d-aa45-3ab9f53b0114" containerID="314aa3269633b7d7c01fef172e6a52d98695ab9fc11d9f91a92ada608070c141" exitCode=0 Feb 19 13:25:43 crc kubenswrapper[4833]: I0219 13:25:43.300750 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hlc6g" event={"ID":"f31b85a4-7f45-470d-aa45-3ab9f53b0114","Type":"ContainerDied","Data":"314aa3269633b7d7c01fef172e6a52d98695ab9fc11d9f91a92ada608070c141"} Feb 19 13:25:43 crc kubenswrapper[4833]: I0219 13:25:43.300774 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hlc6g" Feb 19 13:25:43 crc kubenswrapper[4833]: I0219 13:25:43.300807 4833 scope.go:117] "RemoveContainer" containerID="314aa3269633b7d7c01fef172e6a52d98695ab9fc11d9f91a92ada608070c141" Feb 19 13:25:43 crc kubenswrapper[4833]: I0219 13:25:43.300789 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hlc6g" event={"ID":"f31b85a4-7f45-470d-aa45-3ab9f53b0114","Type":"ContainerDied","Data":"4aa82350d8936f3cf0d6105752a48a2fe2335063370ff36bcc30674144041968"} Feb 19 13:25:43 crc kubenswrapper[4833]: I0219 13:25:43.347012 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hlc6g"] Feb 19 13:25:43 crc kubenswrapper[4833]: I0219 13:25:43.351064 4833 scope.go:117] "RemoveContainer" containerID="ce48725cc9923ef4cdd5f0a0a2f73a61cf1283106a3d1a236bed77e3df318628" Feb 19 13:25:43 crc kubenswrapper[4833]: I0219 13:25:43.356871 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hlc6g"] Feb 19 13:25:43 crc kubenswrapper[4833]: I0219 13:25:43.384648 4833 scope.go:117] "RemoveContainer" containerID="f4016a3c9659620b8d991ec6589c1ac40f0a694e8b53a3bf5c880e70cb6e9560" Feb 19 13:25:43 crc kubenswrapper[4833]: I0219 13:25:43.436290 4833 scope.go:117] "RemoveContainer" containerID="314aa3269633b7d7c01fef172e6a52d98695ab9fc11d9f91a92ada608070c141" Feb 19 13:25:43 crc kubenswrapper[4833]: E0219 13:25:43.436879 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"314aa3269633b7d7c01fef172e6a52d98695ab9fc11d9f91a92ada608070c141\": container with ID starting with 314aa3269633b7d7c01fef172e6a52d98695ab9fc11d9f91a92ada608070c141 not found: ID does not exist" containerID="314aa3269633b7d7c01fef172e6a52d98695ab9fc11d9f91a92ada608070c141" Feb 19 13:25:43 crc kubenswrapper[4833]: I0219 13:25:43.436935 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"314aa3269633b7d7c01fef172e6a52d98695ab9fc11d9f91a92ada608070c141"} err="failed to get container status \"314aa3269633b7d7c01fef172e6a52d98695ab9fc11d9f91a92ada608070c141\": rpc error: code = NotFound desc = could not find container \"314aa3269633b7d7c01fef172e6a52d98695ab9fc11d9f91a92ada608070c141\": container with ID starting with 314aa3269633b7d7c01fef172e6a52d98695ab9fc11d9f91a92ada608070c141 not found: ID does not exist" Feb 19 13:25:43 crc kubenswrapper[4833]: I0219 13:25:43.436970 4833 scope.go:117] "RemoveContainer" containerID="ce48725cc9923ef4cdd5f0a0a2f73a61cf1283106a3d1a236bed77e3df318628" Feb 19 13:25:43 crc kubenswrapper[4833]: E0219 13:25:43.437329 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce48725cc9923ef4cdd5f0a0a2f73a61cf1283106a3d1a236bed77e3df318628\": container with ID starting with ce48725cc9923ef4cdd5f0a0a2f73a61cf1283106a3d1a236bed77e3df318628 not found: ID does not exist" containerID="ce48725cc9923ef4cdd5f0a0a2f73a61cf1283106a3d1a236bed77e3df318628" Feb 19 13:25:43 crc kubenswrapper[4833]: I0219 13:25:43.437392 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce48725cc9923ef4cdd5f0a0a2f73a61cf1283106a3d1a236bed77e3df318628"} err="failed to get container status \"ce48725cc9923ef4cdd5f0a0a2f73a61cf1283106a3d1a236bed77e3df318628\": rpc error: code = NotFound desc = could not find 
container \"ce48725cc9923ef4cdd5f0a0a2f73a61cf1283106a3d1a236bed77e3df318628\": container with ID starting with ce48725cc9923ef4cdd5f0a0a2f73a61cf1283106a3d1a236bed77e3df318628 not found: ID does not exist" Feb 19 13:25:43 crc kubenswrapper[4833]: I0219 13:25:43.437421 4833 scope.go:117] "RemoveContainer" containerID="f4016a3c9659620b8d991ec6589c1ac40f0a694e8b53a3bf5c880e70cb6e9560" Feb 19 13:25:43 crc kubenswrapper[4833]: E0219 13:25:43.437909 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4016a3c9659620b8d991ec6589c1ac40f0a694e8b53a3bf5c880e70cb6e9560\": container with ID starting with f4016a3c9659620b8d991ec6589c1ac40f0a694e8b53a3bf5c880e70cb6e9560 not found: ID does not exist" containerID="f4016a3c9659620b8d991ec6589c1ac40f0a694e8b53a3bf5c880e70cb6e9560" Feb 19 13:25:43 crc kubenswrapper[4833]: I0219 13:25:43.437959 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4016a3c9659620b8d991ec6589c1ac40f0a694e8b53a3bf5c880e70cb6e9560"} err="failed to get container status \"f4016a3c9659620b8d991ec6589c1ac40f0a694e8b53a3bf5c880e70cb6e9560\": rpc error: code = NotFound desc = could not find container \"f4016a3c9659620b8d991ec6589c1ac40f0a694e8b53a3bf5c880e70cb6e9560\": container with ID starting with f4016a3c9659620b8d991ec6589c1ac40f0a694e8b53a3bf5c880e70cb6e9560 not found: ID does not exist" Feb 19 13:25:44 crc kubenswrapper[4833]: I0219 13:25:44.337995 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f31b85a4-7f45-470d-aa45-3ab9f53b0114" path="/var/lib/kubelet/pods/f31b85a4-7f45-470d-aa45-3ab9f53b0114/volumes" Feb 19 13:25:45 crc kubenswrapper[4833]: I0219 13:25:45.745022 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:25:45 crc kubenswrapper[4833]: I0219 13:25:45.745482 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:25:45 crc kubenswrapper[4833]: I0219 13:25:45.745587 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" Feb 19 13:25:45 crc kubenswrapper[4833]: I0219 13:25:45.746452 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ed5ecec62c562c165cc00b7a7ac89f1c7d1f01704fb1a133db6f95b6faf23799"} pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 13:25:45 crc kubenswrapper[4833]: I0219 13:25:45.746589 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" containerID="cri-o://ed5ecec62c562c165cc00b7a7ac89f1c7d1f01704fb1a133db6f95b6faf23799" gracePeriod=600 Feb 19 13:25:45 crc kubenswrapper[4833]: E0219 13:25:45.868874 4833 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:25:46 crc kubenswrapper[4833]: I0219 13:25:46.346031 4833 generic.go:334] "Generic (PLEG): container finished" podID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerID="ed5ecec62c562c165cc00b7a7ac89f1c7d1f01704fb1a133db6f95b6faf23799" exitCode=0 Feb 19 13:25:46 crc kubenswrapper[4833]: I0219 13:25:46.346072 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" event={"ID":"a396d626-cea2-42cf-84c5-943b0b85a92b","Type":"ContainerDied","Data":"ed5ecec62c562c165cc00b7a7ac89f1c7d1f01704fb1a133db6f95b6faf23799"} Feb 19 13:25:46 crc kubenswrapper[4833]: I0219 13:25:46.346103 4833 scope.go:117] "RemoveContainer" containerID="9fc9b0a249b27484ad54a5d5192986aa294d70a00b6a6cd42401166586473b95" Feb 19 13:25:46 crc kubenswrapper[4833]: I0219 13:25:46.346742 4833 scope.go:117] "RemoveContainer" containerID="ed5ecec62c562c165cc00b7a7ac89f1c7d1f01704fb1a133db6f95b6faf23799" Feb 19 13:25:46 crc kubenswrapper[4833]: E0219 13:25:46.347053 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:25:59 crc kubenswrapper[4833]: I0219 13:25:59.316424 4833 scope.go:117] "RemoveContainer" containerID="ed5ecec62c562c165cc00b7a7ac89f1c7d1f01704fb1a133db6f95b6faf23799" Feb 19 13:25:59 crc kubenswrapper[4833]: E0219 13:25:59.317559 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:26:14 crc kubenswrapper[4833]: I0219 13:26:14.315338 4833 scope.go:117] "RemoveContainer" containerID="ed5ecec62c562c165cc00b7a7ac89f1c7d1f01704fb1a133db6f95b6faf23799" Feb 19 13:26:14 crc kubenswrapper[4833]: E0219 13:26:14.316548 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:26:26 crc kubenswrapper[4833]: I0219 13:26:26.315808 4833 scope.go:117] "RemoveContainer" containerID="ed5ecec62c562c165cc00b7a7ac89f1c7d1f01704fb1a133db6f95b6faf23799" Feb 19 13:26:26 crc kubenswrapper[4833]: E0219 13:26:26.316703 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:26:41 crc kubenswrapper[4833]: I0219 13:26:41.316294 4833 scope.go:117] "RemoveContainer" containerID="ed5ecec62c562c165cc00b7a7ac89f1c7d1f01704fb1a133db6f95b6faf23799" Feb 19 13:26:41 crc kubenswrapper[4833]: E0219 13:26:41.317203 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:26:52 crc kubenswrapper[4833]: I0219 13:26:52.316009 4833 scope.go:117] "RemoveContainer" containerID="ed5ecec62c562c165cc00b7a7ac89f1c7d1f01704fb1a133db6f95b6faf23799" Feb 19 13:26:52 crc kubenswrapper[4833]: E0219 13:26:52.316872 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:27:02 crc kubenswrapper[4833]: I0219 13:27:02.784437 4833 generic.go:334] "Generic (PLEG): container finished" podID="cf0f1512-542b-4358-b74b-57df19d9c7d3" containerID="593344c38688080d8f06f661752fcde4b98b1d646ef9ed2d93eb3f40dac777d0" exitCode=0 Feb 19 13:27:02 crc kubenswrapper[4833]: I0219 13:27:02.784565 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" event={"ID":"cf0f1512-542b-4358-b74b-57df19d9c7d3","Type":"ContainerDied","Data":"593344c38688080d8f06f661752fcde4b98b1d646ef9ed2d93eb3f40dac777d0"} Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.280627 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.345299 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-migration-ssh-key-0\") pod \"cf0f1512-542b-4358-b74b-57df19d9c7d3\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.345543 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-cell1-compute-config-3\") pod \"cf0f1512-542b-4358-b74b-57df19d9c7d3\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.345586 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-inventory\") pod \"cf0f1512-542b-4358-b74b-57df19d9c7d3\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.345616 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-cell1-compute-config-2\") pod \"cf0f1512-542b-4358-b74b-57df19d9c7d3\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.345669 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-ssh-key-openstack-edpm-ipam\") pod \"cf0f1512-542b-4358-b74b-57df19d9c7d3\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.345692 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-migration-ssh-key-1\") pod \"cf0f1512-542b-4358-b74b-57df19d9c7d3\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.345760 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-cell1-compute-config-1\") pod \"cf0f1512-542b-4358-b74b-57df19d9c7d3\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.345787 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-cell1-compute-config-0\") pod \"cf0f1512-542b-4358-b74b-57df19d9c7d3\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.345816 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-extra-config-0\") pod \"cf0f1512-542b-4358-b74b-57df19d9c7d3\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.345876 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-zl5pt\" (UniqueName: \"kubernetes.io/projected/cf0f1512-542b-4358-b74b-57df19d9c7d3-kube-api-access-zl5pt\") pod \"cf0f1512-542b-4358-b74b-57df19d9c7d3\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.345902 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-combined-ca-bundle\") pod \"cf0f1512-542b-4358-b74b-57df19d9c7d3\" (UID: \"cf0f1512-542b-4358-b74b-57df19d9c7d3\") " Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.371947 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "cf0f1512-542b-4358-b74b-57df19d9c7d3" (UID: "cf0f1512-542b-4358-b74b-57df19d9c7d3"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.381762 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf0f1512-542b-4358-b74b-57df19d9c7d3-kube-api-access-zl5pt" (OuterVolumeSpecName: "kube-api-access-zl5pt") pod "cf0f1512-542b-4358-b74b-57df19d9c7d3" (UID: "cf0f1512-542b-4358-b74b-57df19d9c7d3"). InnerVolumeSpecName "kube-api-access-zl5pt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.391439 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "cf0f1512-542b-4358-b74b-57df19d9c7d3" (UID: "cf0f1512-542b-4358-b74b-57df19d9c7d3"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.397678 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "cf0f1512-542b-4358-b74b-57df19d9c7d3" (UID: "cf0f1512-542b-4358-b74b-57df19d9c7d3"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.402931 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "cf0f1512-542b-4358-b74b-57df19d9c7d3" (UID: "cf0f1512-542b-4358-b74b-57df19d9c7d3"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.404679 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "cf0f1512-542b-4358-b74b-57df19d9c7d3" (UID: "cf0f1512-542b-4358-b74b-57df19d9c7d3"). InnerVolumeSpecName "nova-cell1-compute-config-3". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.405384 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-inventory" (OuterVolumeSpecName: "inventory") pod "cf0f1512-542b-4358-b74b-57df19d9c7d3" (UID: "cf0f1512-542b-4358-b74b-57df19d9c7d3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.411637 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "cf0f1512-542b-4358-b74b-57df19d9c7d3" (UID: "cf0f1512-542b-4358-b74b-57df19d9c7d3"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.411936 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cf0f1512-542b-4358-b74b-57df19d9c7d3" (UID: "cf0f1512-542b-4358-b74b-57df19d9c7d3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.425006 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "cf0f1512-542b-4358-b74b-57df19d9c7d3" (UID: "cf0f1512-542b-4358-b74b-57df19d9c7d3"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.431309 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "cf0f1512-542b-4358-b74b-57df19d9c7d3" (UID: "cf0f1512-542b-4358-b74b-57df19d9c7d3"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.448855 4833 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.448890 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.448902 4833 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.448916 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.448928 4833 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.448939 4833 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.448949 4833 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.448960 4833 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.448971 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zl5pt\" (UniqueName: \"kubernetes.io/projected/cf0f1512-542b-4358-b74b-57df19d9c7d3-kube-api-access-zl5pt\") on node \"crc\" DevicePath \"\"" Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.448983 4833 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.448993 4833 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/cf0f1512-542b-4358-b74b-57df19d9c7d3-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.811046 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c" event={"ID":"cf0f1512-542b-4358-b74b-57df19d9c7d3","Type":"ContainerDied","Data":"19180bc01c7c80787c2da98ef0dfd0a5d7da1c80c0ec80da138d9e0b9a3e57f8"} Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 
13:27:04.811091 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19180bc01c7c80787c2da98ef0dfd0a5d7da1c80c0ec80da138d9e0b9a3e57f8"
Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.811202 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-rk95c"
Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.956275 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc"]
Feb 19 13:27:04 crc kubenswrapper[4833]: E0219 13:27:04.957107 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f31b85a4-7f45-470d-aa45-3ab9f53b0114" containerName="registry-server"
Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.957143 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f31b85a4-7f45-470d-aa45-3ab9f53b0114" containerName="registry-server"
Feb 19 13:27:04 crc kubenswrapper[4833]: E0219 13:27:04.957182 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f31b85a4-7f45-470d-aa45-3ab9f53b0114" containerName="extract-content"
Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.957198 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f31b85a4-7f45-470d-aa45-3ab9f53b0114" containerName="extract-content"
Feb 19 13:27:04 crc kubenswrapper[4833]: E0219 13:27:04.957228 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f31b85a4-7f45-470d-aa45-3ab9f53b0114" containerName="extract-utilities"
Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.957242 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f31b85a4-7f45-470d-aa45-3ab9f53b0114" containerName="extract-utilities"
Feb 19 13:27:04 crc kubenswrapper[4833]: E0219 13:27:04.957266 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf0f1512-542b-4358-b74b-57df19d9c7d3" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.957281 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf0f1512-542b-4358-b74b-57df19d9c7d3" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.957708 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf0f1512-542b-4358-b74b-57df19d9c7d3" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.957770 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f31b85a4-7f45-470d-aa45-3ab9f53b0114" containerName="registry-server"
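
The RemoveStaleState and "Deleted CPUSet assignment" burst fires when a new pod is admitted and the kubelet's CPU and memory managers reap state left behind by pods that no longer exist (here, the deleted marketplace and nova-edpm pods). That state is checkpointed on disk; the sketch below inspects it, assuming the JSON layout used by recent kubelets at /var/lib/kubelet/cpu_manager_state (verify the field names against your kubelet version before relying on them).

```go
// Minimal inspection of the kubelet CPU-manager checkpoint. The struct
// mirrors the on-disk format as I understand it for recent kubelets;
// treat the field names as an assumption, not a stable API.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type cpuManagerCheckpoint struct {
	PolicyName    string                       `json:"policyName"`
	DefaultCPUSet string                       `json:"defaultCpuSet"`
	Entries       map[string]map[string]string `json:"entries,omitempty"` // pod UID -> container -> cpuset
	Checksum      uint64                       `json:"checksum"`
}

func main() {
	raw, err := os.ReadFile("/var/lib/kubelet/cpu_manager_state")
	if err != nil {
		panic(err)
	}
	var cp cpuManagerCheckpoint
	if err := json.Unmarshal(raw, &cp); err != nil {
		panic(err)
	}
	// A pod UID that lingers here after the pod is gone is exactly the
	// "stale state" that RemoveStaleState reaps on the next pod admission.
	for uid, containers := range cp.Entries {
		for name, cpus := range containers {
			fmt.Printf("pod %s container %s -> cpus %q\n", uid, name, cpus)
		}
	}
}
```
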
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc" Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.965679 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc"] Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.997332 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.997614 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.997655 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.997855 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 19 13:27:04 crc kubenswrapper[4833]: I0219 13:27:04.998137 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-xcxf4" Feb 19 13:27:05 crc kubenswrapper[4833]: I0219 13:27:05.064451 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc\" (UID: \"3d0af35d-1268-4a37-a176-e2ca439c6ba6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc" Feb 19 13:27:05 crc kubenswrapper[4833]: I0219 13:27:05.064590 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc\" (UID: \"3d0af35d-1268-4a37-a176-e2ca439c6ba6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc" Feb 19 13:27:05 crc kubenswrapper[4833]: I0219 13:27:05.064632 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc\" (UID: \"3d0af35d-1268-4a37-a176-e2ca439c6ba6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc" Feb 19 13:27:05 crc kubenswrapper[4833]: I0219 13:27:05.064667 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc\" (UID: \"3d0af35d-1268-4a37-a176-e2ca439c6ba6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc" Feb 19 13:27:05 crc kubenswrapper[4833]: I0219 13:27:05.064933 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcfkh\" (UniqueName: \"kubernetes.io/projected/3d0af35d-1268-4a37-a176-e2ca439c6ba6-kube-api-access-tcfkh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc\" (UID: \"3d0af35d-1268-4a37-a176-e2ca439c6ba6\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc" Feb 19 13:27:05 crc kubenswrapper[4833]: I0219 13:27:05.065019 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc\" (UID: \"3d0af35d-1268-4a37-a176-e2ca439c6ba6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc" Feb 19 13:27:05 crc kubenswrapper[4833]: I0219 13:27:05.065105 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc\" (UID: \"3d0af35d-1268-4a37-a176-e2ca439c6ba6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc" Feb 19 13:27:05 crc kubenswrapper[4833]: I0219 13:27:05.167688 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc\" (UID: \"3d0af35d-1268-4a37-a176-e2ca439c6ba6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc" Feb 19 13:27:05 crc kubenswrapper[4833]: I0219 13:27:05.167782 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc\" (UID: \"3d0af35d-1268-4a37-a176-e2ca439c6ba6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc" Feb 19 13:27:05 crc kubenswrapper[4833]: I0219 13:27:05.167832 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc\" (UID: \"3d0af35d-1268-4a37-a176-e2ca439c6ba6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc" Feb 19 13:27:05 crc kubenswrapper[4833]: I0219 13:27:05.167876 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc\" (UID: \"3d0af35d-1268-4a37-a176-e2ca439c6ba6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc" Feb 19 13:27:05 crc kubenswrapper[4833]: I0219 13:27:05.167968 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcfkh\" (UniqueName: \"kubernetes.io/projected/3d0af35d-1268-4a37-a176-e2ca439c6ba6-kube-api-access-tcfkh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc\" (UID: \"3d0af35d-1268-4a37-a176-e2ca439c6ba6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc" Feb 19 13:27:05 crc kubenswrapper[4833]: I0219 13:27:05.168083 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-inventory\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc\" (UID: \"3d0af35d-1268-4a37-a176-e2ca439c6ba6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc" Feb 19 13:27:05 crc kubenswrapper[4833]: I0219 13:27:05.168891 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc\" (UID: \"3d0af35d-1268-4a37-a176-e2ca439c6ba6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc" Feb 19 13:27:05 crc kubenswrapper[4833]: I0219 13:27:05.173443 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc\" (UID: \"3d0af35d-1268-4a37-a176-e2ca439c6ba6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc" Feb 19 13:27:05 crc kubenswrapper[4833]: I0219 13:27:05.174902 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc\" (UID: \"3d0af35d-1268-4a37-a176-e2ca439c6ba6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc" Feb 19 13:27:05 crc kubenswrapper[4833]: I0219 13:27:05.175020 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc\" (UID: \"3d0af35d-1268-4a37-a176-e2ca439c6ba6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc" Feb 19 13:27:05 crc kubenswrapper[4833]: I0219 13:27:05.176248 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc\" (UID: \"3d0af35d-1268-4a37-a176-e2ca439c6ba6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc" Feb 19 13:27:05 crc kubenswrapper[4833]: I0219 13:27:05.177740 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc\" (UID: \"3d0af35d-1268-4a37-a176-e2ca439c6ba6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc" Feb 19 13:27:05 crc kubenswrapper[4833]: I0219 13:27:05.182434 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc\" (UID: \"3d0af35d-1268-4a37-a176-e2ca439c6ba6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc" Feb 19 13:27:05 crc kubenswrapper[4833]: I0219 13:27:05.190371 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcfkh\" (UniqueName: 
\"kubernetes.io/projected/3d0af35d-1268-4a37-a176-e2ca439c6ba6-kube-api-access-tcfkh\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc\" (UID: \"3d0af35d-1268-4a37-a176-e2ca439c6ba6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc" Feb 19 13:27:05 crc kubenswrapper[4833]: I0219 13:27:05.310789 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc" Feb 19 13:27:05 crc kubenswrapper[4833]: I0219 13:27:05.869911 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc"] Feb 19 13:27:05 crc kubenswrapper[4833]: I0219 13:27:05.877128 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 13:27:06 crc kubenswrapper[4833]: I0219 13:27:06.315313 4833 scope.go:117] "RemoveContainer" containerID="ed5ecec62c562c165cc00b7a7ac89f1c7d1f01704fb1a133db6f95b6faf23799" Feb 19 13:27:06 crc kubenswrapper[4833]: E0219 13:27:06.316078 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:27:06 crc kubenswrapper[4833]: I0219 13:27:06.835529 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc" event={"ID":"3d0af35d-1268-4a37-a176-e2ca439c6ba6","Type":"ContainerStarted","Data":"5c50d775a4560718e21acdef41145e92774e19a925f2819e3530b658c93f2025"} Feb 19 13:27:06 crc kubenswrapper[4833]: I0219 13:27:06.835602 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc" event={"ID":"3d0af35d-1268-4a37-a176-e2ca439c6ba6","Type":"ContainerStarted","Data":"278d9d0e0a340f69ada9af89eec1287b1221a4ff3e179ee50423a42a7c4c5e28"} Feb 19 13:27:06 crc kubenswrapper[4833]: I0219 13:27:06.859378 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc" podStartSLOduration=2.439194924 podStartE2EDuration="2.859363132s" podCreationTimestamp="2026-02-19 13:27:04 +0000 UTC" firstStartedPulling="2026-02-19 13:27:05.876857291 +0000 UTC m=+2436.272376069" lastFinishedPulling="2026-02-19 13:27:06.297025479 +0000 UTC m=+2436.692544277" observedRunningTime="2026-02-19 13:27:06.854317129 +0000 UTC m=+2437.249835897" watchObservedRunningTime="2026-02-19 13:27:06.859363132 +0000 UTC m=+2437.254881900" Feb 19 13:27:17 crc kubenswrapper[4833]: I0219 13:27:17.314936 4833 scope.go:117] "RemoveContainer" containerID="ed5ecec62c562c165cc00b7a7ac89f1c7d1f01704fb1a133db6f95b6faf23799" Feb 19 13:27:17 crc kubenswrapper[4833]: E0219 13:27:17.316009 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:27:28 crc kubenswrapper[4833]: I0219 
Feb 19 13:27:28 crc kubenswrapper[4833]: I0219 13:27:28.315235 4833 scope.go:117] "RemoveContainer" containerID="ed5ecec62c562c165cc00b7a7ac89f1c7d1f01704fb1a133db6f95b6faf23799"
Feb 19 13:27:28 crc kubenswrapper[4833]: E0219 13:27:28.316526 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:27:41 crc kubenswrapper[4833]: I0219 13:27:41.314672 4833 scope.go:117] "RemoveContainer" containerID="ed5ecec62c562c165cc00b7a7ac89f1c7d1f01704fb1a133db6f95b6faf23799"
Feb 19 13:27:41 crc kubenswrapper[4833]: E0219 13:27:41.315615 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:27:53 crc kubenswrapper[4833]: I0219 13:27:53.315752 4833 scope.go:117] "RemoveContainer" containerID="ed5ecec62c562c165cc00b7a7ac89f1c7d1f01704fb1a133db6f95b6faf23799"
Feb 19 13:27:53 crc kubenswrapper[4833]: E0219 13:27:53.317135 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:28:07 crc kubenswrapper[4833]: I0219 13:28:07.315623 4833 scope.go:117] "RemoveContainer" containerID="ed5ecec62c562c165cc00b7a7ac89f1c7d1f01704fb1a133db6f95b6faf23799"
Feb 19 13:28:07 crc kubenswrapper[4833]: E0219 13:28:07.317043 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:28:19 crc kubenswrapper[4833]: I0219 13:28:19.316182 4833 scope.go:117] "RemoveContainer" containerID="ed5ecec62c562c165cc00b7a7ac89f1c7d1f01704fb1a133db6f95b6faf23799"
Feb 19 13:28:19 crc kubenswrapper[4833]: E0219 13:28:19.317102 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:28:31 crc kubenswrapper[4833]: I0219 13:28:31.315249 4833 scope.go:117] "RemoveContainer" containerID="ed5ecec62c562c165cc00b7a7ac89f1c7d1f01704fb1a133db6f95b6faf23799"
Feb 19 13:28:31 crc kubenswrapper[4833]: E0219 13:28:31.316030 
4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:28:42 crc kubenswrapper[4833]: I0219 13:28:42.315295 4833 scope.go:117] "RemoveContainer" containerID="ed5ecec62c562c165cc00b7a7ac89f1c7d1f01704fb1a133db6f95b6faf23799" Feb 19 13:28:42 crc kubenswrapper[4833]: E0219 13:28:42.317171 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:28:56 crc kubenswrapper[4833]: I0219 13:28:56.315324 4833 scope.go:117] "RemoveContainer" containerID="ed5ecec62c562c165cc00b7a7ac89f1c7d1f01704fb1a133db6f95b6faf23799" Feb 19 13:28:56 crc kubenswrapper[4833]: E0219 13:28:56.316383 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:29:09 crc kubenswrapper[4833]: I0219 13:29:09.315122 4833 scope.go:117] "RemoveContainer" containerID="ed5ecec62c562c165cc00b7a7ac89f1c7d1f01704fb1a133db6f95b6faf23799" Feb 19 13:29:09 crc kubenswrapper[4833]: E0219 13:29:09.316198 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:29:20 crc kubenswrapper[4833]: I0219 13:29:20.328706 4833 scope.go:117] "RemoveContainer" containerID="ed5ecec62c562c165cc00b7a7ac89f1c7d1f01704fb1a133db6f95b6faf23799" Feb 19 13:29:20 crc kubenswrapper[4833]: E0219 13:29:20.329657 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:29:34 crc kubenswrapper[4833]: I0219 13:29:34.315919 4833 scope.go:117] "RemoveContainer" containerID="ed5ecec62c562c165cc00b7a7ac89f1c7d1f01704fb1a133db6f95b6faf23799" Feb 19 13:29:34 crc kubenswrapper[4833]: E0219 13:29:34.317443 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:29:38 crc kubenswrapper[4833]: I0219 13:29:38.429912 4833 generic.go:334] "Generic (PLEG): container finished" podID="3d0af35d-1268-4a37-a176-e2ca439c6ba6" containerID="5c50d775a4560718e21acdef41145e92774e19a925f2819e3530b658c93f2025" exitCode=0 Feb 19 13:29:38 crc kubenswrapper[4833]: I0219 13:29:38.429984 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc" event={"ID":"3d0af35d-1268-4a37-a176-e2ca439c6ba6","Type":"ContainerDied","Data":"5c50d775a4560718e21acdef41145e92774e19a925f2819e3530b658c93f2025"} Feb 19 13:29:39 crc kubenswrapper[4833]: I0219 13:29:39.913183 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc" Feb 19 13:29:39 crc kubenswrapper[4833]: I0219 13:29:39.987189 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-telemetry-combined-ca-bundle\") pod \"3d0af35d-1268-4a37-a176-e2ca439c6ba6\" (UID: \"3d0af35d-1268-4a37-a176-e2ca439c6ba6\") " Feb 19 13:29:39 crc kubenswrapper[4833]: I0219 13:29:39.987267 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-ssh-key-openstack-edpm-ipam\") pod \"3d0af35d-1268-4a37-a176-e2ca439c6ba6\" (UID: \"3d0af35d-1268-4a37-a176-e2ca439c6ba6\") " Feb 19 13:29:39 crc kubenswrapper[4833]: I0219 13:29:39.987337 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-ceilometer-compute-config-data-1\") pod \"3d0af35d-1268-4a37-a176-e2ca439c6ba6\" (UID: \"3d0af35d-1268-4a37-a176-e2ca439c6ba6\") " Feb 19 13:29:39 crc kubenswrapper[4833]: I0219 13:29:39.987400 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-ceilometer-compute-config-data-0\") pod \"3d0af35d-1268-4a37-a176-e2ca439c6ba6\" (UID: \"3d0af35d-1268-4a37-a176-e2ca439c6ba6\") " Feb 19 13:29:39 crc kubenswrapper[4833]: I0219 13:29:39.988217 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-inventory\") pod \"3d0af35d-1268-4a37-a176-e2ca439c6ba6\" (UID: \"3d0af35d-1268-4a37-a176-e2ca439c6ba6\") " Feb 19 13:29:39 crc kubenswrapper[4833]: I0219 13:29:39.988286 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcfkh\" (UniqueName: \"kubernetes.io/projected/3d0af35d-1268-4a37-a176-e2ca439c6ba6-kube-api-access-tcfkh\") pod \"3d0af35d-1268-4a37-a176-e2ca439c6ba6\" (UID: \"3d0af35d-1268-4a37-a176-e2ca439c6ba6\") " Feb 19 13:29:39 crc kubenswrapper[4833]: I0219 13:29:39.988466 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-ceilometer-compute-config-data-2\") pod \"3d0af35d-1268-4a37-a176-e2ca439c6ba6\" (UID: \"3d0af35d-1268-4a37-a176-e2ca439c6ba6\") " Feb 19 13:29:39 crc kubenswrapper[4833]: I0219 13:29:39.994359 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d0af35d-1268-4a37-a176-e2ca439c6ba6-kube-api-access-tcfkh" (OuterVolumeSpecName: "kube-api-access-tcfkh") pod "3d0af35d-1268-4a37-a176-e2ca439c6ba6" (UID: "3d0af35d-1268-4a37-a176-e2ca439c6ba6"). InnerVolumeSpecName "kube-api-access-tcfkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:29:39 crc kubenswrapper[4833]: I0219 13:29:39.996104 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "3d0af35d-1268-4a37-a176-e2ca439c6ba6" (UID: "3d0af35d-1268-4a37-a176-e2ca439c6ba6"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:29:40 crc kubenswrapper[4833]: I0219 13:29:40.022418 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-inventory" (OuterVolumeSpecName: "inventory") pod "3d0af35d-1268-4a37-a176-e2ca439c6ba6" (UID: "3d0af35d-1268-4a37-a176-e2ca439c6ba6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:29:40 crc kubenswrapper[4833]: I0219 13:29:40.025066 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "3d0af35d-1268-4a37-a176-e2ca439c6ba6" (UID: "3d0af35d-1268-4a37-a176-e2ca439c6ba6"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:29:40 crc kubenswrapper[4833]: I0219 13:29:40.025846 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3d0af35d-1268-4a37-a176-e2ca439c6ba6" (UID: "3d0af35d-1268-4a37-a176-e2ca439c6ba6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:29:40 crc kubenswrapper[4833]: I0219 13:29:40.044588 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "3d0af35d-1268-4a37-a176-e2ca439c6ba6" (UID: "3d0af35d-1268-4a37-a176-e2ca439c6ba6"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:29:40 crc kubenswrapper[4833]: I0219 13:29:40.047562 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "3d0af35d-1268-4a37-a176-e2ca439c6ba6" (UID: "3d0af35d-1268-4a37-a176-e2ca439c6ba6"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:29:40 crc kubenswrapper[4833]: I0219 13:29:40.090201 4833 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:40 crc kubenswrapper[4833]: I0219 13:29:40.090235 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:40 crc kubenswrapper[4833]: I0219 13:29:40.090244 4833 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:40 crc kubenswrapper[4833]: I0219 13:29:40.090253 4833 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:40 crc kubenswrapper[4833]: I0219 13:29:40.090265 4833 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-inventory\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:40 crc kubenswrapper[4833]: I0219 13:29:40.090274 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcfkh\" (UniqueName: \"kubernetes.io/projected/3d0af35d-1268-4a37-a176-e2ca439c6ba6-kube-api-access-tcfkh\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:40 crc kubenswrapper[4833]: I0219 13:29:40.090284 4833 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/3d0af35d-1268-4a37-a176-e2ca439c6ba6-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 19 13:29:40 crc kubenswrapper[4833]: I0219 13:29:40.455977 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc" event={"ID":"3d0af35d-1268-4a37-a176-e2ca439c6ba6","Type":"ContainerDied","Data":"278d9d0e0a340f69ada9af89eec1287b1221a4ff3e179ee50423a42a7c4c5e28"} Feb 19 13:29:40 crc kubenswrapper[4833]: I0219 13:29:40.456376 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="278d9d0e0a340f69ada9af89eec1287b1221a4ff3e179ee50423a42a7c4c5e28" Feb 19 13:29:40 crc kubenswrapper[4833]: I0219 13:29:40.456072 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc" Feb 19 13:29:46 crc kubenswrapper[4833]: I0219 13:29:46.315374 4833 scope.go:117] "RemoveContainer" containerID="ed5ecec62c562c165cc00b7a7ac89f1c7d1f01704fb1a133db6f95b6faf23799" Feb 19 13:29:46 crc kubenswrapper[4833]: E0219 13:29:46.316511 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:29:59 crc kubenswrapper[4833]: I0219 13:29:59.316065 4833 scope.go:117] "RemoveContainer" containerID="ed5ecec62c562c165cc00b7a7ac89f1c7d1f01704fb1a133db6f95b6faf23799" Feb 19 13:29:59 crc kubenswrapper[4833]: E0219 13:29:59.317174 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:30:00 crc kubenswrapper[4833]: I0219 13:30:00.176176 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525130-b9bkg"] Feb 19 13:30:00 crc kubenswrapper[4833]: E0219 13:30:00.177043 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d0af35d-1268-4a37-a176-e2ca439c6ba6" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 19 13:30:00 crc kubenswrapper[4833]: I0219 13:30:00.177072 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d0af35d-1268-4a37-a176-e2ca439c6ba6" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 19 13:30:00 crc kubenswrapper[4833]: I0219 13:30:00.177409 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d0af35d-1268-4a37-a176-e2ca439c6ba6" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 19 13:30:00 crc kubenswrapper[4833]: I0219 13:30:00.178358 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525130-b9bkg" Feb 19 13:30:00 crc kubenswrapper[4833]: I0219 13:30:00.180902 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 13:30:00 crc kubenswrapper[4833]: I0219 13:30:00.181128 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 13:30:00 crc kubenswrapper[4833]: I0219 13:30:00.188806 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525130-b9bkg"] Feb 19 13:30:00 crc kubenswrapper[4833]: I0219 13:30:00.361680 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11de7f40-f179-42e9-974f-f5c2f5e36c4e-secret-volume\") pod \"collect-profiles-29525130-b9bkg\" (UID: \"11de7f40-f179-42e9-974f-f5c2f5e36c4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525130-b9bkg" Feb 19 13:30:00 crc kubenswrapper[4833]: I0219 13:30:00.361874 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11de7f40-f179-42e9-974f-f5c2f5e36c4e-config-volume\") pod \"collect-profiles-29525130-b9bkg\" (UID: \"11de7f40-f179-42e9-974f-f5c2f5e36c4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525130-b9bkg" Feb 19 13:30:00 crc kubenswrapper[4833]: I0219 13:30:00.361977 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv9dd\" (UniqueName: \"kubernetes.io/projected/11de7f40-f179-42e9-974f-f5c2f5e36c4e-kube-api-access-bv9dd\") pod \"collect-profiles-29525130-b9bkg\" (UID: \"11de7f40-f179-42e9-974f-f5c2f5e36c4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525130-b9bkg" Feb 19 13:30:00 crc kubenswrapper[4833]: I0219 13:30:00.463917 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11de7f40-f179-42e9-974f-f5c2f5e36c4e-config-volume\") pod \"collect-profiles-29525130-b9bkg\" (UID: \"11de7f40-f179-42e9-974f-f5c2f5e36c4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525130-b9bkg" Feb 19 13:30:00 crc kubenswrapper[4833]: I0219 13:30:00.464237 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv9dd\" (UniqueName: \"kubernetes.io/projected/11de7f40-f179-42e9-974f-f5c2f5e36c4e-kube-api-access-bv9dd\") pod \"collect-profiles-29525130-b9bkg\" (UID: \"11de7f40-f179-42e9-974f-f5c2f5e36c4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525130-b9bkg" Feb 19 13:30:00 crc kubenswrapper[4833]: I0219 13:30:00.464548 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11de7f40-f179-42e9-974f-f5c2f5e36c4e-secret-volume\") pod \"collect-profiles-29525130-b9bkg\" (UID: \"11de7f40-f179-42e9-974f-f5c2f5e36c4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525130-b9bkg" Feb 19 13:30:00 crc kubenswrapper[4833]: I0219 13:30:00.464728 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11de7f40-f179-42e9-974f-f5c2f5e36c4e-config-volume\") pod 
\"collect-profiles-29525130-b9bkg\" (UID: \"11de7f40-f179-42e9-974f-f5c2f5e36c4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525130-b9bkg" Feb 19 13:30:00 crc kubenswrapper[4833]: I0219 13:30:00.475296 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11de7f40-f179-42e9-974f-f5c2f5e36c4e-secret-volume\") pod \"collect-profiles-29525130-b9bkg\" (UID: \"11de7f40-f179-42e9-974f-f5c2f5e36c4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525130-b9bkg" Feb 19 13:30:00 crc kubenswrapper[4833]: I0219 13:30:00.484027 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv9dd\" (UniqueName: \"kubernetes.io/projected/11de7f40-f179-42e9-974f-f5c2f5e36c4e-kube-api-access-bv9dd\") pod \"collect-profiles-29525130-b9bkg\" (UID: \"11de7f40-f179-42e9-974f-f5c2f5e36c4e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525130-b9bkg" Feb 19 13:30:00 crc kubenswrapper[4833]: I0219 13:30:00.512954 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525130-b9bkg" Feb 19 13:30:00 crc kubenswrapper[4833]: I0219 13:30:00.779820 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525130-b9bkg"] Feb 19 13:30:01 crc kubenswrapper[4833]: I0219 13:30:01.687011 4833 generic.go:334] "Generic (PLEG): container finished" podID="11de7f40-f179-42e9-974f-f5c2f5e36c4e" containerID="b6dff61c78656b15addcd4b6bb0df13d94eb52cdbf5d62ff92be0eb0e0f1b43f" exitCode=0 Feb 19 13:30:01 crc kubenswrapper[4833]: I0219 13:30:01.687082 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525130-b9bkg" event={"ID":"11de7f40-f179-42e9-974f-f5c2f5e36c4e","Type":"ContainerDied","Data":"b6dff61c78656b15addcd4b6bb0df13d94eb52cdbf5d62ff92be0eb0e0f1b43f"} Feb 19 13:30:01 crc kubenswrapper[4833]: I0219 13:30:01.687694 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525130-b9bkg" event={"ID":"11de7f40-f179-42e9-974f-f5c2f5e36c4e","Type":"ContainerStarted","Data":"e501bb2de4639d83de304a7f0ab884e8dd5d71076293c6d839cee58ef99d1e98"} Feb 19 13:30:03 crc kubenswrapper[4833]: I0219 13:30:03.105639 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525130-b9bkg" Feb 19 13:30:03 crc kubenswrapper[4833]: I0219 13:30:03.222190 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11de7f40-f179-42e9-974f-f5c2f5e36c4e-secret-volume\") pod \"11de7f40-f179-42e9-974f-f5c2f5e36c4e\" (UID: \"11de7f40-f179-42e9-974f-f5c2f5e36c4e\") " Feb 19 13:30:03 crc kubenswrapper[4833]: I0219 13:30:03.222558 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11de7f40-f179-42e9-974f-f5c2f5e36c4e-config-volume\") pod \"11de7f40-f179-42e9-974f-f5c2f5e36c4e\" (UID: \"11de7f40-f179-42e9-974f-f5c2f5e36c4e\") " Feb 19 13:30:03 crc kubenswrapper[4833]: I0219 13:30:03.222802 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bv9dd\" (UniqueName: \"kubernetes.io/projected/11de7f40-f179-42e9-974f-f5c2f5e36c4e-kube-api-access-bv9dd\") pod \"11de7f40-f179-42e9-974f-f5c2f5e36c4e\" (UID: \"11de7f40-f179-42e9-974f-f5c2f5e36c4e\") " Feb 19 13:30:03 crc kubenswrapper[4833]: I0219 13:30:03.223357 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11de7f40-f179-42e9-974f-f5c2f5e36c4e-config-volume" (OuterVolumeSpecName: "config-volume") pod "11de7f40-f179-42e9-974f-f5c2f5e36c4e" (UID: "11de7f40-f179-42e9-974f-f5c2f5e36c4e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:30:03 crc kubenswrapper[4833]: I0219 13:30:03.229565 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11de7f40-f179-42e9-974f-f5c2f5e36c4e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "11de7f40-f179-42e9-974f-f5c2f5e36c4e" (UID: "11de7f40-f179-42e9-974f-f5c2f5e36c4e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:30:03 crc kubenswrapper[4833]: I0219 13:30:03.229930 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11de7f40-f179-42e9-974f-f5c2f5e36c4e-kube-api-access-bv9dd" (OuterVolumeSpecName: "kube-api-access-bv9dd") pod "11de7f40-f179-42e9-974f-f5c2f5e36c4e" (UID: "11de7f40-f179-42e9-974f-f5c2f5e36c4e"). InnerVolumeSpecName "kube-api-access-bv9dd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:30:03 crc kubenswrapper[4833]: I0219 13:30:03.325214 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bv9dd\" (UniqueName: \"kubernetes.io/projected/11de7f40-f179-42e9-974f-f5c2f5e36c4e-kube-api-access-bv9dd\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:03 crc kubenswrapper[4833]: I0219 13:30:03.325505 4833 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11de7f40-f179-42e9-974f-f5c2f5e36c4e-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:03 crc kubenswrapper[4833]: I0219 13:30:03.325519 4833 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11de7f40-f179-42e9-974f-f5c2f5e36c4e-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 13:30:03 crc kubenswrapper[4833]: I0219 13:30:03.715531 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525130-b9bkg" event={"ID":"11de7f40-f179-42e9-974f-f5c2f5e36c4e","Type":"ContainerDied","Data":"e501bb2de4639d83de304a7f0ab884e8dd5d71076293c6d839cee58ef99d1e98"} Feb 19 13:30:03 crc kubenswrapper[4833]: I0219 13:30:03.715578 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525130-b9bkg" Feb 19 13:30:03 crc kubenswrapper[4833]: I0219 13:30:03.715602 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e501bb2de4639d83de304a7f0ab884e8dd5d71076293c6d839cee58ef99d1e98" Feb 19 13:30:04 crc kubenswrapper[4833]: I0219 13:30:04.190684 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525085-npsvp"] Feb 19 13:30:04 crc kubenswrapper[4833]: I0219 13:30:04.198450 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525085-npsvp"] Feb 19 13:30:04 crc kubenswrapper[4833]: I0219 13:30:04.338231 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f436432a-f92b-4b2a-89a9-8014f487dc12" path="/var/lib/kubelet/pods/f436432a-f92b-4b2a-89a9-8014f487dc12/volumes" Feb 19 13:30:13 crc kubenswrapper[4833]: I0219 13:30:13.314535 4833 scope.go:117] "RemoveContainer" containerID="ed5ecec62c562c165cc00b7a7ac89f1c7d1f01704fb1a133db6f95b6faf23799" Feb 19 13:30:13 crc kubenswrapper[4833]: E0219 13:30:13.315343 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:30:27 crc kubenswrapper[4833]: I0219 13:30:27.315019 4833 scope.go:117] "RemoveContainer" containerID="ed5ecec62c562c165cc00b7a7ac89f1c7d1f01704fb1a133db6f95b6faf23799" Feb 19 13:30:27 crc kubenswrapper[4833]: E0219 13:30:27.316454 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.606227 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 19 13:30:35 crc kubenswrapper[4833]: E0219 13:30:35.607356 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11de7f40-f179-42e9-974f-f5c2f5e36c4e" containerName="collect-profiles" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.607374 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="11de7f40-f179-42e9-974f-f5c2f5e36c4e" containerName="collect-profiles" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.607793 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="11de7f40-f179-42e9-974f-f5c2f5e36c4e" containerName="collect-profiles" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.610365 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.614973 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.615735 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-4g9nf" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.616068 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.617169 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.634602 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.704671 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " pod="openstack/tempest-tests-tempest" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.704753 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fbca1583-1d12-4e49-bda3-864536093e85-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " pod="openstack/tempest-tests-tempest" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.704790 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fbca1583-1d12-4e49-bda3-864536093e85-config-data\") pod \"tempest-tests-tempest\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " pod="openstack/tempest-tests-tempest" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.704876 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpkqb\" (UniqueName: \"kubernetes.io/projected/fbca1583-1d12-4e49-bda3-864536093e85-kube-api-access-bpkqb\") pod \"tempest-tests-tempest\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " pod="openstack/tempest-tests-tempest" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.705123 4833 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fbca1583-1d12-4e49-bda3-864536093e85-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " pod="openstack/tempest-tests-tempest" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.705206 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fbca1583-1d12-4e49-bda3-864536093e85-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " pod="openstack/tempest-tests-tempest" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.705245 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fbca1583-1d12-4e49-bda3-864536093e85-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " pod="openstack/tempest-tests-tempest" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.705309 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fbca1583-1d12-4e49-bda3-864536093e85-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " pod="openstack/tempest-tests-tempest" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.705532 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fbca1583-1d12-4e49-bda3-864536093e85-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " pod="openstack/tempest-tests-tempest" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.808428 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fbca1583-1d12-4e49-bda3-864536093e85-config-data\") pod \"tempest-tests-tempest\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " pod="openstack/tempest-tests-tempest" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.808651 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpkqb\" (UniqueName: \"kubernetes.io/projected/fbca1583-1d12-4e49-bda3-864536093e85-kube-api-access-bpkqb\") pod \"tempest-tests-tempest\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " pod="openstack/tempest-tests-tempest" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.808770 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fbca1583-1d12-4e49-bda3-864536093e85-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " pod="openstack/tempest-tests-tempest" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.808826 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fbca1583-1d12-4e49-bda3-864536093e85-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " pod="openstack/tempest-tests-tempest" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.808880 4833 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fbca1583-1d12-4e49-bda3-864536093e85-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " pod="openstack/tempest-tests-tempest" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.808954 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fbca1583-1d12-4e49-bda3-864536093e85-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " pod="openstack/tempest-tests-tempest" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.809119 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fbca1583-1d12-4e49-bda3-864536093e85-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " pod="openstack/tempest-tests-tempest" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.809356 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " pod="openstack/tempest-tests-tempest" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.809423 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fbca1583-1d12-4e49-bda3-864536093e85-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " pod="openstack/tempest-tests-tempest" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.810668 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fbca1583-1d12-4e49-bda3-864536093e85-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " pod="openstack/tempest-tests-tempest" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.810939 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fbca1583-1d12-4e49-bda3-864536093e85-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " pod="openstack/tempest-tests-tempest" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.811062 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fbca1583-1d12-4e49-bda3-864536093e85-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " pod="openstack/tempest-tests-tempest" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.811066 4833 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/tempest-tests-tempest" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.812010 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/fbca1583-1d12-4e49-bda3-864536093e85-config-data\") pod \"tempest-tests-tempest\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " pod="openstack/tempest-tests-tempest" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.816927 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fbca1583-1d12-4e49-bda3-864536093e85-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " pod="openstack/tempest-tests-tempest" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.818302 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fbca1583-1d12-4e49-bda3-864536093e85-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " pod="openstack/tempest-tests-tempest" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.821258 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fbca1583-1d12-4e49-bda3-864536093e85-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " pod="openstack/tempest-tests-tempest" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.836053 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpkqb\" (UniqueName: \"kubernetes.io/projected/fbca1583-1d12-4e49-bda3-864536093e85-kube-api-access-bpkqb\") pod \"tempest-tests-tempest\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " pod="openstack/tempest-tests-tempest" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.861955 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " pod="openstack/tempest-tests-tempest" Feb 19 13:30:35 crc kubenswrapper[4833]: I0219 13:30:35.952659 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 19 13:30:36 crc kubenswrapper[4833]: I0219 13:30:36.465159 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 19 13:30:37 crc kubenswrapper[4833]: I0219 13:30:37.086882 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"fbca1583-1d12-4e49-bda3-864536093e85","Type":"ContainerStarted","Data":"04730a653ad9866d446b2234606538024e1090eb0cb22c93902f0b0383dac4de"} Feb 19 13:30:38 crc kubenswrapper[4833]: I0219 13:30:38.315077 4833 scope.go:117] "RemoveContainer" containerID="ed5ecec62c562c165cc00b7a7ac89f1c7d1f01704fb1a133db6f95b6faf23799" Feb 19 13:30:38 crc kubenswrapper[4833]: E0219 13:30:38.315354 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:30:49 crc kubenswrapper[4833]: I0219 13:30:49.896147 4833 scope.go:117] "RemoveContainer" containerID="b07ed14e35ba092e38f1f9d40089d8df4020b768a3e9fe5355fb6e57f7423d66" Feb 19 13:30:52 crc kubenswrapper[4833]: I0219 13:30:52.338303 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l6rkg"] Feb 19 13:30:52 crc kubenswrapper[4833]: I0219 13:30:52.340989 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l6rkg" Feb 19 13:30:52 crc kubenswrapper[4833]: I0219 13:30:52.364306 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l6rkg"] Feb 19 13:30:52 crc kubenswrapper[4833]: I0219 13:30:52.463100 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b69797cd-7003-461c-b61b-80731a421cb4-utilities\") pod \"community-operators-l6rkg\" (UID: \"b69797cd-7003-461c-b61b-80731a421cb4\") " pod="openshift-marketplace/community-operators-l6rkg" Feb 19 13:30:52 crc kubenswrapper[4833]: I0219 13:30:52.463603 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b69797cd-7003-461c-b61b-80731a421cb4-catalog-content\") pod \"community-operators-l6rkg\" (UID: \"b69797cd-7003-461c-b61b-80731a421cb4\") " pod="openshift-marketplace/community-operators-l6rkg" Feb 19 13:30:52 crc kubenswrapper[4833]: I0219 13:30:52.463837 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9xls\" (UniqueName: \"kubernetes.io/projected/b69797cd-7003-461c-b61b-80731a421cb4-kube-api-access-q9xls\") pod \"community-operators-l6rkg\" (UID: \"b69797cd-7003-461c-b61b-80731a421cb4\") " pod="openshift-marketplace/community-operators-l6rkg" Feb 19 13:30:52 crc kubenswrapper[4833]: I0219 13:30:52.566097 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b69797cd-7003-461c-b61b-80731a421cb4-utilities\") pod \"community-operators-l6rkg\" (UID: \"b69797cd-7003-461c-b61b-80731a421cb4\") " pod="openshift-marketplace/community-operators-l6rkg" Feb 19 
13:30:52 crc kubenswrapper[4833]: I0219 13:30:52.566168 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b69797cd-7003-461c-b61b-80731a421cb4-catalog-content\") pod \"community-operators-l6rkg\" (UID: \"b69797cd-7003-461c-b61b-80731a421cb4\") " pod="openshift-marketplace/community-operators-l6rkg" Feb 19 13:30:52 crc kubenswrapper[4833]: I0219 13:30:52.566281 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9xls\" (UniqueName: \"kubernetes.io/projected/b69797cd-7003-461c-b61b-80731a421cb4-kube-api-access-q9xls\") pod \"community-operators-l6rkg\" (UID: \"b69797cd-7003-461c-b61b-80731a421cb4\") " pod="openshift-marketplace/community-operators-l6rkg" Feb 19 13:30:52 crc kubenswrapper[4833]: I0219 13:30:52.566888 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b69797cd-7003-461c-b61b-80731a421cb4-utilities\") pod \"community-operators-l6rkg\" (UID: \"b69797cd-7003-461c-b61b-80731a421cb4\") " pod="openshift-marketplace/community-operators-l6rkg" Feb 19 13:30:52 crc kubenswrapper[4833]: I0219 13:30:52.567085 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b69797cd-7003-461c-b61b-80731a421cb4-catalog-content\") pod \"community-operators-l6rkg\" (UID: \"b69797cd-7003-461c-b61b-80731a421cb4\") " pod="openshift-marketplace/community-operators-l6rkg" Feb 19 13:30:52 crc kubenswrapper[4833]: I0219 13:30:52.603371 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9xls\" (UniqueName: \"kubernetes.io/projected/b69797cd-7003-461c-b61b-80731a421cb4-kube-api-access-q9xls\") pod \"community-operators-l6rkg\" (UID: \"b69797cd-7003-461c-b61b-80731a421cb4\") " pod="openshift-marketplace/community-operators-l6rkg" Feb 19 13:30:52 crc kubenswrapper[4833]: I0219 13:30:52.665595 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l6rkg" Feb 19 13:30:53 crc kubenswrapper[4833]: I0219 13:30:53.314849 4833 scope.go:117] "RemoveContainer" containerID="ed5ecec62c562c165cc00b7a7ac89f1c7d1f01704fb1a133db6f95b6faf23799" Feb 19 13:31:07 crc kubenswrapper[4833]: E0219 13:31:07.868303 4833 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 19 13:31:07 crc kubenswrapper[4833]: E0219 13:31:07.869751 4833 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bpkqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[
]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(fbca1583-1d12-4e49-bda3-864536093e85): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 19 13:31:07 crc kubenswrapper[4833]: E0219 13:31:07.871036 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="fbca1583-1d12-4e49-bda3-864536093e85" Feb 19 13:31:08 crc kubenswrapper[4833]: I0219 13:31:08.337200 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l6rkg"] Feb 19 13:31:08 crc kubenswrapper[4833]: I0219 13:31:08.428585 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" event={"ID":"a396d626-cea2-42cf-84c5-943b0b85a92b","Type":"ContainerStarted","Data":"8ab6a38a220e1cc1992d0be3d5da374fe107c85f0ef6131e668a438d5db15f13"} Feb 19 13:31:08 crc kubenswrapper[4833]: I0219 13:31:08.434962 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l6rkg" event={"ID":"b69797cd-7003-461c-b61b-80731a421cb4","Type":"ContainerStarted","Data":"5c95613e09d8fe6a1534c5144e180f9a8273a89b93c74d879b993d2e4c0caa08"} Feb 19 13:31:08 crc kubenswrapper[4833]: E0219 13:31:08.436564 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="fbca1583-1d12-4e49-bda3-864536093e85" Feb 19 13:31:09 crc kubenswrapper[4833]: I0219 13:31:09.447603 4833 generic.go:334] "Generic (PLEG): container finished" podID="b69797cd-7003-461c-b61b-80731a421cb4" containerID="090232dadc586ac4052747da5d28926b5bd3fc35efca4bf39b5d9bfc27ab65fe" exitCode=0 Feb 19 13:31:09 crc kubenswrapper[4833]: I0219 13:31:09.447897 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l6rkg" event={"ID":"b69797cd-7003-461c-b61b-80731a421cb4","Type":"ContainerDied","Data":"090232dadc586ac4052747da5d28926b5bd3fc35efca4bf39b5d9bfc27ab65fe"} Feb 19 13:31:11 crc kubenswrapper[4833]: I0219 13:31:11.479819 4833 generic.go:334] "Generic (PLEG): container finished" podID="b69797cd-7003-461c-b61b-80731a421cb4" containerID="db6db98ac2374a667ba5f109b7ac716ab1959fe681f34e141090fc588ea527c8" exitCode=0 Feb 19 13:31:11 crc kubenswrapper[4833]: I0219 13:31:11.480633 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l6rkg" event={"ID":"b69797cd-7003-461c-b61b-80731a421cb4","Type":"ContainerDied","Data":"db6db98ac2374a667ba5f109b7ac716ab1959fe681f34e141090fc588ea527c8"} Feb 19 13:31:12 crc kubenswrapper[4833]: I0219 13:31:12.494417 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l6rkg" event={"ID":"b69797cd-7003-461c-b61b-80731a421cb4","Type":"ContainerStarted","Data":"dfd55e860666d5f28adbd5d3d3a3f744fdbf29c4ddea4699cab042500d2b2db4"} Feb 19 13:31:12 crc kubenswrapper[4833]: I0219 13:31:12.526872 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-l6rkg" podStartSLOduration=18.038008195 podStartE2EDuration="20.526856004s" podCreationTimestamp="2026-02-19 13:30:52 +0000 UTC" firstStartedPulling="2026-02-19 13:31:09.451165491 +0000 UTC m=+2679.846684299" lastFinishedPulling="2026-02-19 13:31:11.94001332 +0000 UTC m=+2682.335532108" observedRunningTime="2026-02-19 13:31:12.522432657 +0000 UTC m=+2682.917951455" watchObservedRunningTime="2026-02-19 13:31:12.526856004 +0000 UTC m=+2682.922374772" Feb 19 13:31:12 crc kubenswrapper[4833]: I0219 13:31:12.666735 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l6rkg" Feb 19 13:31:12 crc kubenswrapper[4833]: I0219 13:31:12.666785 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l6rkg" Feb 19 13:31:13 crc kubenswrapper[4833]: I0219 13:31:13.733657 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-l6rkg" podUID="b69797cd-7003-461c-b61b-80731a421cb4" containerName="registry-server" probeResult="failure" output=< Feb 19 13:31:13 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s Feb 19 13:31:13 crc kubenswrapper[4833]: > Feb 19 13:31:22 crc kubenswrapper[4833]: I0219 13:31:22.606080 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"fbca1583-1d12-4e49-bda3-864536093e85","Type":"ContainerStarted","Data":"f6e272ab2c214823f3e1379dc3adac62d28bfa2e27495fe7e695ad28bc055358"} Feb 19 13:31:22 crc kubenswrapper[4833]: I0219 13:31:22.634625 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.297541246 podStartE2EDuration="48.634603005s" podCreationTimestamp="2026-02-19 13:30:34 +0000 UTC" firstStartedPulling="2026-02-19 13:30:36.480268853 +0000 UTC m=+2646.875787631" lastFinishedPulling="2026-02-19 13:31:20.817330582 +0000 UTC m=+2691.212849390" observedRunningTime="2026-02-19 13:31:22.62613216 +0000 UTC m=+2693.021650988" watchObservedRunningTime="2026-02-19 13:31:22.634603005 +0000 UTC m=+2693.030121783" Feb 19 13:31:22 crc kubenswrapper[4833]: I0219 13:31:22.743675 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l6rkg" Feb 19 13:31:22 crc kubenswrapper[4833]: I0219 13:31:22.805568 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l6rkg" Feb 19 13:31:23 crc kubenswrapper[4833]: I0219 13:31:23.531333 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l6rkg"] Feb 19 13:31:24 crc kubenswrapper[4833]: I0219 13:31:24.626007 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l6rkg" podUID="b69797cd-7003-461c-b61b-80731a421cb4" containerName="registry-server" containerID="cri-o://dfd55e860666d5f28adbd5d3d3a3f744fdbf29c4ddea4699cab042500d2b2db4" gracePeriod=2 Feb 19 13:31:25 crc kubenswrapper[4833]: I0219 13:31:25.195697 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l6rkg"
Feb 19 13:31:25 crc kubenswrapper[4833]: I0219 13:31:25.229538 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9xls\" (UniqueName: \"kubernetes.io/projected/b69797cd-7003-461c-b61b-80731a421cb4-kube-api-access-q9xls\") pod \"b69797cd-7003-461c-b61b-80731a421cb4\" (UID: \"b69797cd-7003-461c-b61b-80731a421cb4\") "
Feb 19 13:31:25 crc kubenswrapper[4833]: I0219 13:31:25.229688 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b69797cd-7003-461c-b61b-80731a421cb4-catalog-content\") pod \"b69797cd-7003-461c-b61b-80731a421cb4\" (UID: \"b69797cd-7003-461c-b61b-80731a421cb4\") "
Feb 19 13:31:25 crc kubenswrapper[4833]: I0219 13:31:25.229802 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b69797cd-7003-461c-b61b-80731a421cb4-utilities\") pod \"b69797cd-7003-461c-b61b-80731a421cb4\" (UID: \"b69797cd-7003-461c-b61b-80731a421cb4\") "
Feb 19 13:31:25 crc kubenswrapper[4833]: I0219 13:31:25.231283 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b69797cd-7003-461c-b61b-80731a421cb4-utilities" (OuterVolumeSpecName: "utilities") pod "b69797cd-7003-461c-b61b-80731a421cb4" (UID: "b69797cd-7003-461c-b61b-80731a421cb4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 13:31:25 crc kubenswrapper[4833]: I0219 13:31:25.235282 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b69797cd-7003-461c-b61b-80731a421cb4-kube-api-access-q9xls" (OuterVolumeSpecName: "kube-api-access-q9xls") pod "b69797cd-7003-461c-b61b-80731a421cb4" (UID: "b69797cd-7003-461c-b61b-80731a421cb4"). InnerVolumeSpecName "kube-api-access-q9xls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 13:31:25 crc kubenswrapper[4833]: I0219 13:31:25.280631 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b69797cd-7003-461c-b61b-80731a421cb4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b69797cd-7003-461c-b61b-80731a421cb4" (UID: "b69797cd-7003-461c-b61b-80731a421cb4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 13:31:25 crc kubenswrapper[4833]: I0219 13:31:25.331712 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9xls\" (UniqueName: \"kubernetes.io/projected/b69797cd-7003-461c-b61b-80731a421cb4-kube-api-access-q9xls\") on node \"crc\" DevicePath \"\""
Feb 19 13:31:25 crc kubenswrapper[4833]: I0219 13:31:25.331741 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b69797cd-7003-461c-b61b-80731a421cb4-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 13:31:25 crc kubenswrapper[4833]: I0219 13:31:25.331750 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b69797cd-7003-461c-b61b-80731a421cb4-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 13:31:25 crc kubenswrapper[4833]: I0219 13:31:25.642668 4833 generic.go:334] "Generic (PLEG): container finished" podID="b69797cd-7003-461c-b61b-80731a421cb4" containerID="dfd55e860666d5f28adbd5d3d3a3f744fdbf29c4ddea4699cab042500d2b2db4" exitCode=0
Feb 19 13:31:25 crc kubenswrapper[4833]: I0219 13:31:25.642747 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l6rkg" event={"ID":"b69797cd-7003-461c-b61b-80731a421cb4","Type":"ContainerDied","Data":"dfd55e860666d5f28adbd5d3d3a3f744fdbf29c4ddea4699cab042500d2b2db4"}
Feb 19 13:31:25 crc kubenswrapper[4833]: I0219 13:31:25.642793 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l6rkg" event={"ID":"b69797cd-7003-461c-b61b-80731a421cb4","Type":"ContainerDied","Data":"5c95613e09d8fe6a1534c5144e180f9a8273a89b93c74d879b993d2e4c0caa08"}
Feb 19 13:31:25 crc kubenswrapper[4833]: I0219 13:31:25.642826 4833 scope.go:117] "RemoveContainer" containerID="dfd55e860666d5f28adbd5d3d3a3f744fdbf29c4ddea4699cab042500d2b2db4"
Feb 19 13:31:25 crc kubenswrapper[4833]: I0219 13:31:25.643031 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l6rkg"
Feb 19 13:31:25 crc kubenswrapper[4833]: I0219 13:31:25.676085 4833 scope.go:117] "RemoveContainer" containerID="db6db98ac2374a667ba5f109b7ac716ab1959fe681f34e141090fc588ea527c8"
Feb 19 13:31:25 crc kubenswrapper[4833]: I0219 13:31:25.711662 4833 scope.go:117] "RemoveContainer" containerID="090232dadc586ac4052747da5d28926b5bd3fc35efca4bf39b5d9bfc27ab65fe"
Feb 19 13:31:25 crc kubenswrapper[4833]: I0219 13:31:25.712045 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l6rkg"]
Feb 19 13:31:25 crc kubenswrapper[4833]: I0219 13:31:25.726710 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l6rkg"]
Feb 19 13:31:25 crc kubenswrapper[4833]: I0219 13:31:25.754251 4833 scope.go:117] "RemoveContainer" containerID="dfd55e860666d5f28adbd5d3d3a3f744fdbf29c4ddea4699cab042500d2b2db4"
Feb 19 13:31:25 crc kubenswrapper[4833]: E0219 13:31:25.755153 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfd55e860666d5f28adbd5d3d3a3f744fdbf29c4ddea4699cab042500d2b2db4\": container with ID starting with dfd55e860666d5f28adbd5d3d3a3f744fdbf29c4ddea4699cab042500d2b2db4 not found: ID does not exist" containerID="dfd55e860666d5f28adbd5d3d3a3f744fdbf29c4ddea4699cab042500d2b2db4"
Feb 19 13:31:25 crc kubenswrapper[4833]: I0219 13:31:25.755214 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfd55e860666d5f28adbd5d3d3a3f744fdbf29c4ddea4699cab042500d2b2db4"} err="failed to get container status \"dfd55e860666d5f28adbd5d3d3a3f744fdbf29c4ddea4699cab042500d2b2db4\": rpc error: code = NotFound desc = could not find container \"dfd55e860666d5f28adbd5d3d3a3f744fdbf29c4ddea4699cab042500d2b2db4\": container with ID starting with dfd55e860666d5f28adbd5d3d3a3f744fdbf29c4ddea4699cab042500d2b2db4 not found: ID does not exist"
Feb 19 13:31:25 crc kubenswrapper[4833]: I0219 13:31:25.755256 4833 scope.go:117] "RemoveContainer" containerID="db6db98ac2374a667ba5f109b7ac716ab1959fe681f34e141090fc588ea527c8"
Feb 19 13:31:25 crc kubenswrapper[4833]: E0219 13:31:25.756420 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db6db98ac2374a667ba5f109b7ac716ab1959fe681f34e141090fc588ea527c8\": container with ID starting with db6db98ac2374a667ba5f109b7ac716ab1959fe681f34e141090fc588ea527c8 not found: ID does not exist" containerID="db6db98ac2374a667ba5f109b7ac716ab1959fe681f34e141090fc588ea527c8"
Feb 19 13:31:25 crc kubenswrapper[4833]: I0219 13:31:25.756462 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db6db98ac2374a667ba5f109b7ac716ab1959fe681f34e141090fc588ea527c8"} err="failed to get container status \"db6db98ac2374a667ba5f109b7ac716ab1959fe681f34e141090fc588ea527c8\": rpc error: code = NotFound desc = could not find container \"db6db98ac2374a667ba5f109b7ac716ab1959fe681f34e141090fc588ea527c8\": container with ID starting with db6db98ac2374a667ba5f109b7ac716ab1959fe681f34e141090fc588ea527c8 not found: ID does not exist"
Feb 19 13:31:25 crc kubenswrapper[4833]: I0219 13:31:25.756484 4833 scope.go:117] "RemoveContainer" containerID="090232dadc586ac4052747da5d28926b5bd3fc35efca4bf39b5d9bfc27ab65fe"
Feb 19 13:31:25 crc kubenswrapper[4833]: E0219 13:31:25.757054 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"090232dadc586ac4052747da5d28926b5bd3fc35efca4bf39b5d9bfc27ab65fe\": container with ID starting with 090232dadc586ac4052747da5d28926b5bd3fc35efca4bf39b5d9bfc27ab65fe not found: ID does not exist" containerID="090232dadc586ac4052747da5d28926b5bd3fc35efca4bf39b5d9bfc27ab65fe"
Feb 19 13:31:25 crc kubenswrapper[4833]: I0219 13:31:25.757099 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"090232dadc586ac4052747da5d28926b5bd3fc35efca4bf39b5d9bfc27ab65fe"} err="failed to get container status \"090232dadc586ac4052747da5d28926b5bd3fc35efca4bf39b5d9bfc27ab65fe\": rpc error: code = NotFound desc = could not find container \"090232dadc586ac4052747da5d28926b5bd3fc35efca4bf39b5d9bfc27ab65fe\": container with ID starting with 090232dadc586ac4052747da5d28926b5bd3fc35efca4bf39b5d9bfc27ab65fe not found: ID does not exist"
Feb 19 13:31:26 crc kubenswrapper[4833]: I0219 13:31:26.326762 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b69797cd-7003-461c-b61b-80731a421cb4" path="/var/lib/kubelet/pods/b69797cd-7003-461c-b61b-80731a421cb4/volumes"
Feb 19 13:33:15 crc kubenswrapper[4833]: I0219 13:33:15.744487 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 13:33:15 crc kubenswrapper[4833]: I0219 13:33:15.745047 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 13:33:45 crc kubenswrapper[4833]: I0219 13:33:45.745003 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 13:33:45 crc kubenswrapper[4833]: I0219 13:33:45.745748 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 13:34:15 crc kubenswrapper[4833]: I0219 13:34:15.744472 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 13:34:15 crc kubenswrapper[4833]: I0219 13:34:15.744928 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 13:34:15 crc kubenswrapper[4833]: I0219 13:34:15.744978 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp"
Feb 19 13:34:15 crc kubenswrapper[4833]: I0219 13:34:15.745782 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8ab6a38a220e1cc1992d0be3d5da374fe107c85f0ef6131e668a438d5db15f13"} pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 13:34:15 crc kubenswrapper[4833]: I0219 13:34:15.745833 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" containerID="cri-o://8ab6a38a220e1cc1992d0be3d5da374fe107c85f0ef6131e668a438d5db15f13" gracePeriod=600
Feb 19 13:34:16 crc kubenswrapper[4833]: I0219 13:34:16.455921 4833 generic.go:334] "Generic (PLEG): container finished" podID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerID="8ab6a38a220e1cc1992d0be3d5da374fe107c85f0ef6131e668a438d5db15f13" exitCode=0
Feb 19 13:34:16 crc kubenswrapper[4833]: I0219 13:34:16.456001 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" event={"ID":"a396d626-cea2-42cf-84c5-943b0b85a92b","Type":"ContainerDied","Data":"8ab6a38a220e1cc1992d0be3d5da374fe107c85f0ef6131e668a438d5db15f13"}
Feb 19 13:34:16 crc kubenswrapper[4833]: I0219 13:34:16.456824 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" event={"ID":"a396d626-cea2-42cf-84c5-943b0b85a92b","Type":"ContainerStarted","Data":"2b2c9ce8fca5d7b68e625bbda1e92a6813d290bf01130f898c68091491f1d19a"}
Feb 19 13:34:16 crc kubenswrapper[4833]: I0219 13:34:16.456864 4833 scope.go:117] "RemoveContainer" containerID="ed5ecec62c562c165cc00b7a7ac89f1c7d1f01704fb1a133db6f95b6faf23799"
Feb 19 13:34:53 crc kubenswrapper[4833]: I0219 13:34:53.448805 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-blf75"]
Feb 19 13:34:53 crc kubenswrapper[4833]: E0219 13:34:53.450692 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b69797cd-7003-461c-b61b-80731a421cb4" containerName="registry-server"
Feb 19 13:34:53 crc kubenswrapper[4833]: I0219 13:34:53.450729 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b69797cd-7003-461c-b61b-80731a421cb4" containerName="registry-server"
Feb 19 13:34:53 crc kubenswrapper[4833]: E0219 13:34:53.450800 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b69797cd-7003-461c-b61b-80731a421cb4" containerName="extract-utilities"
Feb 19 13:34:53 crc kubenswrapper[4833]: I0219 13:34:53.450820 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b69797cd-7003-461c-b61b-80731a421cb4" containerName="extract-utilities"
Feb 19 13:34:53 crc kubenswrapper[4833]: E0219 13:34:53.450857 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b69797cd-7003-461c-b61b-80731a421cb4" containerName="extract-content"
Feb 19 13:34:53 crc kubenswrapper[4833]: I0219 13:34:53.450879 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="b69797cd-7003-461c-b61b-80731a421cb4" containerName="extract-content"
Feb 19 13:34:53 crc kubenswrapper[4833]: I0219 13:34:53.451341 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="b69797cd-7003-461c-b61b-80731a421cb4" containerName="registry-server"
Feb 19 13:34:53 crc kubenswrapper[4833]: I0219 13:34:53.456553 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-blf75"
Feb 19 13:34:53 crc kubenswrapper[4833]: I0219 13:34:53.462515 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-blf75"]
Feb 19 13:34:53 crc kubenswrapper[4833]: I0219 13:34:53.561079 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw5rh\" (UniqueName: \"kubernetes.io/projected/3c887b0d-3eeb-439d-b063-434410ef1337-kube-api-access-fw5rh\") pod \"certified-operators-blf75\" (UID: \"3c887b0d-3eeb-439d-b063-434410ef1337\") " pod="openshift-marketplace/certified-operators-blf75"
Feb 19 13:34:53 crc kubenswrapper[4833]: I0219 13:34:53.561213 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c887b0d-3eeb-439d-b063-434410ef1337-utilities\") pod \"certified-operators-blf75\" (UID: \"3c887b0d-3eeb-439d-b063-434410ef1337\") " pod="openshift-marketplace/certified-operators-blf75"
Feb 19 13:34:53 crc kubenswrapper[4833]: I0219 13:34:53.561299 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c887b0d-3eeb-439d-b063-434410ef1337-catalog-content\") pod \"certified-operators-blf75\" (UID: \"3c887b0d-3eeb-439d-b063-434410ef1337\") " pod="openshift-marketplace/certified-operators-blf75"
Feb 19 13:34:53 crc kubenswrapper[4833]: I0219 13:34:53.662887 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c887b0d-3eeb-439d-b063-434410ef1337-catalog-content\") pod \"certified-operators-blf75\" (UID: \"3c887b0d-3eeb-439d-b063-434410ef1337\") " pod="openshift-marketplace/certified-operators-blf75"
Feb 19 13:34:53 crc kubenswrapper[4833]: I0219 13:34:53.663001 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw5rh\" (UniqueName: \"kubernetes.io/projected/3c887b0d-3eeb-439d-b063-434410ef1337-kube-api-access-fw5rh\") pod \"certified-operators-blf75\" (UID: \"3c887b0d-3eeb-439d-b063-434410ef1337\") " pod="openshift-marketplace/certified-operators-blf75"
Feb 19 13:34:53 crc kubenswrapper[4833]: I0219 13:34:53.663103 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c887b0d-3eeb-439d-b063-434410ef1337-utilities\") pod \"certified-operators-blf75\" (UID: \"3c887b0d-3eeb-439d-b063-434410ef1337\") " pod="openshift-marketplace/certified-operators-blf75"
Feb 19 13:34:53 crc kubenswrapper[4833]: I0219 13:34:53.663406 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c887b0d-3eeb-439d-b063-434410ef1337-catalog-content\") pod \"certified-operators-blf75\" (UID: \"3c887b0d-3eeb-439d-b063-434410ef1337\") " pod="openshift-marketplace/certified-operators-blf75"
Feb 19 13:34:53 crc kubenswrapper[4833]: I0219 13:34:53.663705 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c887b0d-3eeb-439d-b063-434410ef1337-utilities\") pod \"certified-operators-blf75\" (UID: \"3c887b0d-3eeb-439d-b063-434410ef1337\") " pod="openshift-marketplace/certified-operators-blf75"
Feb 19 13:34:53 crc kubenswrapper[4833]: I0219 13:34:53.685428 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw5rh\" (UniqueName: \"kubernetes.io/projected/3c887b0d-3eeb-439d-b063-434410ef1337-kube-api-access-fw5rh\") pod \"certified-operators-blf75\" (UID: \"3c887b0d-3eeb-439d-b063-434410ef1337\") " pod="openshift-marketplace/certified-operators-blf75"
Feb 19 13:34:53 crc kubenswrapper[4833]: I0219 13:34:53.793224 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-blf75"
Feb 19 13:34:54 crc kubenswrapper[4833]: I0219 13:34:54.280424 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-blf75"]
Feb 19 13:34:54 crc kubenswrapper[4833]: I0219 13:34:54.880252 4833 generic.go:334] "Generic (PLEG): container finished" podID="3c887b0d-3eeb-439d-b063-434410ef1337" containerID="cd0a5bed93b24b75638d9dfcaf93571d1040a469f1b456e7cd74df87c1bf9c32" exitCode=0
Feb 19 13:34:54 crc kubenswrapper[4833]: I0219 13:34:54.880396 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blf75" event={"ID":"3c887b0d-3eeb-439d-b063-434410ef1337","Type":"ContainerDied","Data":"cd0a5bed93b24b75638d9dfcaf93571d1040a469f1b456e7cd74df87c1bf9c32"}
Feb 19 13:34:54 crc kubenswrapper[4833]: I0219 13:34:54.880806 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blf75" event={"ID":"3c887b0d-3eeb-439d-b063-434410ef1337","Type":"ContainerStarted","Data":"c38fc616146c1717ea077cce7ce072489a784c7843daa0c4100367fa94e6b3d1"}
Feb 19 13:34:54 crc kubenswrapper[4833]: I0219 13:34:54.884634 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 19 13:34:55 crc kubenswrapper[4833]: I0219 13:34:55.889583 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blf75" event={"ID":"3c887b0d-3eeb-439d-b063-434410ef1337","Type":"ContainerStarted","Data":"8b67fc7e5cc859cd81ae1097f61d8a9d10e9729a3d73d874d5e50cedf641ecc5"}
Feb 19 13:34:56 crc kubenswrapper[4833]: I0219 13:34:56.904809 4833 generic.go:334] "Generic (PLEG): container finished" podID="3c887b0d-3eeb-439d-b063-434410ef1337" containerID="8b67fc7e5cc859cd81ae1097f61d8a9d10e9729a3d73d874d5e50cedf641ecc5" exitCode=0
Feb 19 13:34:56 crc kubenswrapper[4833]: I0219 13:34:56.904929 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blf75" event={"ID":"3c887b0d-3eeb-439d-b063-434410ef1337","Type":"ContainerDied","Data":"8b67fc7e5cc859cd81ae1097f61d8a9d10e9729a3d73d874d5e50cedf641ecc5"}
Feb 19 13:34:57 crc kubenswrapper[4833]: I0219 13:34:57.916304 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blf75" event={"ID":"3c887b0d-3eeb-439d-b063-434410ef1337","Type":"ContainerStarted","Data":"cc8bf1825849a019bc33cfa1a576f24a2a5f79d4a1cafb8f833cd33a54816041"}
Feb 19 13:34:57 crc kubenswrapper[4833]: I0219 13:34:57.943175 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-blf75" podStartSLOduration=2.483361477 podStartE2EDuration="4.943150776s" podCreationTimestamp="2026-02-19 13:34:53 +0000 UTC" firstStartedPulling="2026-02-19 13:34:54.884236234 +0000 UTC m=+2905.279755012" lastFinishedPulling="2026-02-19 13:34:57.344025543 +0000 UTC m=+2907.739544311" observedRunningTime="2026-02-19 13:34:57.932804841 +0000 UTC m=+2908.328323609" watchObservedRunningTime="2026-02-19 13:34:57.943150776 +0000 UTC m=+2908.338669544"
Feb 19 13:35:03 crc kubenswrapper[4833]: I0219 13:35:03.793679 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-blf75"
Feb 19 13:35:03 crc kubenswrapper[4833]: I0219 13:35:03.794678 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-blf75"
Feb 19 13:35:03 crc kubenswrapper[4833]: I0219 13:35:03.849310 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-blf75"
Feb 19 13:35:04 crc kubenswrapper[4833]: I0219 13:35:04.018163 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-blf75"
Feb 19 13:35:04 crc kubenswrapper[4833]: I0219 13:35:04.103571 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-blf75"]
Feb 19 13:35:06 crc kubenswrapper[4833]: I0219 13:35:06.009646 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-blf75" podUID="3c887b0d-3eeb-439d-b063-434410ef1337" containerName="registry-server" containerID="cri-o://cc8bf1825849a019bc33cfa1a576f24a2a5f79d4a1cafb8f833cd33a54816041" gracePeriod=2
Feb 19 13:35:06 crc kubenswrapper[4833]: I0219 13:35:06.524741 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-blf75"
Feb 19 13:35:06 crc kubenswrapper[4833]: I0219 13:35:06.562294 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw5rh\" (UniqueName: \"kubernetes.io/projected/3c887b0d-3eeb-439d-b063-434410ef1337-kube-api-access-fw5rh\") pod \"3c887b0d-3eeb-439d-b063-434410ef1337\" (UID: \"3c887b0d-3eeb-439d-b063-434410ef1337\") "
Feb 19 13:35:06 crc kubenswrapper[4833]: I0219 13:35:06.562346 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c887b0d-3eeb-439d-b063-434410ef1337-utilities\") pod \"3c887b0d-3eeb-439d-b063-434410ef1337\" (UID: \"3c887b0d-3eeb-439d-b063-434410ef1337\") "
Feb 19 13:35:06 crc kubenswrapper[4833]: I0219 13:35:06.562370 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c887b0d-3eeb-439d-b063-434410ef1337-catalog-content\") pod \"3c887b0d-3eeb-439d-b063-434410ef1337\" (UID: \"3c887b0d-3eeb-439d-b063-434410ef1337\") "
Feb 19 13:35:06 crc kubenswrapper[4833]: I0219 13:35:06.564948 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c887b0d-3eeb-439d-b063-434410ef1337-utilities" (OuterVolumeSpecName: "utilities") pod "3c887b0d-3eeb-439d-b063-434410ef1337" (UID: "3c887b0d-3eeb-439d-b063-434410ef1337"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 13:35:06 crc kubenswrapper[4833]: I0219 13:35:06.572923 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c887b0d-3eeb-439d-b063-434410ef1337-kube-api-access-fw5rh" (OuterVolumeSpecName: "kube-api-access-fw5rh") pod "3c887b0d-3eeb-439d-b063-434410ef1337" (UID: "3c887b0d-3eeb-439d-b063-434410ef1337"). InnerVolumeSpecName "kube-api-access-fw5rh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 13:35:06 crc kubenswrapper[4833]: I0219 13:35:06.625302 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c887b0d-3eeb-439d-b063-434410ef1337-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c887b0d-3eeb-439d-b063-434410ef1337" (UID: "3c887b0d-3eeb-439d-b063-434410ef1337"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 13:35:06 crc kubenswrapper[4833]: I0219 13:35:06.665180 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw5rh\" (UniqueName: \"kubernetes.io/projected/3c887b0d-3eeb-439d-b063-434410ef1337-kube-api-access-fw5rh\") on node \"crc\" DevicePath \"\""
Feb 19 13:35:06 crc kubenswrapper[4833]: I0219 13:35:06.665207 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c887b0d-3eeb-439d-b063-434410ef1337-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 13:35:06 crc kubenswrapper[4833]: I0219 13:35:06.665216 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c887b0d-3eeb-439d-b063-434410ef1337-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 13:35:07 crc kubenswrapper[4833]: I0219 13:35:07.020846 4833 generic.go:334] "Generic (PLEG): container finished" podID="3c887b0d-3eeb-439d-b063-434410ef1337" containerID="cc8bf1825849a019bc33cfa1a576f24a2a5f79d4a1cafb8f833cd33a54816041" exitCode=0
Feb 19 13:35:07 crc kubenswrapper[4833]: I0219 13:35:07.020916 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blf75" event={"ID":"3c887b0d-3eeb-439d-b063-434410ef1337","Type":"ContainerDied","Data":"cc8bf1825849a019bc33cfa1a576f24a2a5f79d4a1cafb8f833cd33a54816041"}
Feb 19 13:35:07 crc kubenswrapper[4833]: I0219 13:35:07.021362 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-blf75" event={"ID":"3c887b0d-3eeb-439d-b063-434410ef1337","Type":"ContainerDied","Data":"c38fc616146c1717ea077cce7ce072489a784c7843daa0c4100367fa94e6b3d1"}
Feb 19 13:35:07 crc kubenswrapper[4833]: I0219 13:35:07.021408 4833 scope.go:117] "RemoveContainer" containerID="cc8bf1825849a019bc33cfa1a576f24a2a5f79d4a1cafb8f833cd33a54816041"
Feb 19 13:35:07 crc kubenswrapper[4833]: I0219 13:35:07.020972 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-blf75"
Feb 19 13:35:07 crc kubenswrapper[4833]: I0219 13:35:07.055116 4833 scope.go:117] "RemoveContainer" containerID="8b67fc7e5cc859cd81ae1097f61d8a9d10e9729a3d73d874d5e50cedf641ecc5"
Feb 19 13:35:07 crc kubenswrapper[4833]: I0219 13:35:07.082462 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-blf75"]
Feb 19 13:35:07 crc kubenswrapper[4833]: I0219 13:35:07.101815 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-blf75"]
Feb 19 13:35:07 crc kubenswrapper[4833]: I0219 13:35:07.111307 4833 scope.go:117] "RemoveContainer" containerID="cd0a5bed93b24b75638d9dfcaf93571d1040a469f1b456e7cd74df87c1bf9c32"
Feb 19 13:35:07 crc kubenswrapper[4833]: I0219 13:35:07.160110 4833 scope.go:117] "RemoveContainer" containerID="cc8bf1825849a019bc33cfa1a576f24a2a5f79d4a1cafb8f833cd33a54816041"
Feb 19 13:35:07 crc kubenswrapper[4833]: E0219 13:35:07.160669 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc8bf1825849a019bc33cfa1a576f24a2a5f79d4a1cafb8f833cd33a54816041\": container with ID starting with cc8bf1825849a019bc33cfa1a576f24a2a5f79d4a1cafb8f833cd33a54816041 not found: ID does not exist" containerID="cc8bf1825849a019bc33cfa1a576f24a2a5f79d4a1cafb8f833cd33a54816041"
Feb 19 13:35:07 crc kubenswrapper[4833]: I0219 13:35:07.161070 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc8bf1825849a019bc33cfa1a576f24a2a5f79d4a1cafb8f833cd33a54816041"} err="failed to get container status \"cc8bf1825849a019bc33cfa1a576f24a2a5f79d4a1cafb8f833cd33a54816041\": rpc error: code = NotFound desc = could not find container \"cc8bf1825849a019bc33cfa1a576f24a2a5f79d4a1cafb8f833cd33a54816041\": container with ID starting with cc8bf1825849a019bc33cfa1a576f24a2a5f79d4a1cafb8f833cd33a54816041 not found: ID does not exist"
Feb 19 13:35:07 crc kubenswrapper[4833]: I0219 13:35:07.161108 4833 scope.go:117] "RemoveContainer" containerID="8b67fc7e5cc859cd81ae1097f61d8a9d10e9729a3d73d874d5e50cedf641ecc5"
Feb 19 13:35:07 crc kubenswrapper[4833]: E0219 13:35:07.161631 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b67fc7e5cc859cd81ae1097f61d8a9d10e9729a3d73d874d5e50cedf641ecc5\": container with ID starting with 8b67fc7e5cc859cd81ae1097f61d8a9d10e9729a3d73d874d5e50cedf641ecc5 not found: ID does not exist" containerID="8b67fc7e5cc859cd81ae1097f61d8a9d10e9729a3d73d874d5e50cedf641ecc5"
Feb 19 13:35:07 crc kubenswrapper[4833]: I0219 13:35:07.161681 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b67fc7e5cc859cd81ae1097f61d8a9d10e9729a3d73d874d5e50cedf641ecc5"} err="failed to get container status \"8b67fc7e5cc859cd81ae1097f61d8a9d10e9729a3d73d874d5e50cedf641ecc5\": rpc error: code = NotFound desc = could not find container \"8b67fc7e5cc859cd81ae1097f61d8a9d10e9729a3d73d874d5e50cedf641ecc5\": container with ID starting with 8b67fc7e5cc859cd81ae1097f61d8a9d10e9729a3d73d874d5e50cedf641ecc5 not found: ID does not exist"
Feb 19 13:35:07 crc kubenswrapper[4833]: I0219 13:35:07.161718 4833 scope.go:117] "RemoveContainer" containerID="cd0a5bed93b24b75638d9dfcaf93571d1040a469f1b456e7cd74df87c1bf9c32"
Feb 19 13:35:07 crc kubenswrapper[4833]: E0219 13:35:07.161983 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd0a5bed93b24b75638d9dfcaf93571d1040a469f1b456e7cd74df87c1bf9c32\": container with ID starting with cd0a5bed93b24b75638d9dfcaf93571d1040a469f1b456e7cd74df87c1bf9c32 not found: ID does not exist" containerID="cd0a5bed93b24b75638d9dfcaf93571d1040a469f1b456e7cd74df87c1bf9c32"
Feb 19 13:35:07 crc kubenswrapper[4833]: I0219 13:35:07.162010 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd0a5bed93b24b75638d9dfcaf93571d1040a469f1b456e7cd74df87c1bf9c32"} err="failed to get container status \"cd0a5bed93b24b75638d9dfcaf93571d1040a469f1b456e7cd74df87c1bf9c32\": rpc error: code = NotFound desc = could not find container \"cd0a5bed93b24b75638d9dfcaf93571d1040a469f1b456e7cd74df87c1bf9c32\": container with ID starting with cd0a5bed93b24b75638d9dfcaf93571d1040a469f1b456e7cd74df87c1bf9c32 not found: ID does not exist"
Feb 19 13:35:08 crc kubenswrapper[4833]: I0219 13:35:08.352460 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c887b0d-3eeb-439d-b063-434410ef1337" path="/var/lib/kubelet/pods/3c887b0d-3eeb-439d-b063-434410ef1337/volumes"
Feb 19 13:35:08 crc kubenswrapper[4833]: I0219 13:35:08.733601 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n29nw"]
Feb 19 13:35:08 crc kubenswrapper[4833]: E0219 13:35:08.734326 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c887b0d-3eeb-439d-b063-434410ef1337" containerName="extract-content"
Feb 19 13:35:08 crc kubenswrapper[4833]: I0219 13:35:08.734345 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c887b0d-3eeb-439d-b063-434410ef1337" containerName="extract-content"
Feb 19 13:35:08 crc kubenswrapper[4833]: E0219 13:35:08.734391 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c887b0d-3eeb-439d-b063-434410ef1337" containerName="registry-server"
Feb 19 13:35:08 crc kubenswrapper[4833]: I0219 13:35:08.734399 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c887b0d-3eeb-439d-b063-434410ef1337" containerName="registry-server"
Feb 19 13:35:08 crc kubenswrapper[4833]: E0219 13:35:08.734416 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c887b0d-3eeb-439d-b063-434410ef1337" containerName="extract-utilities"
Feb 19 13:35:08 crc kubenswrapper[4833]: I0219 13:35:08.734425 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c887b0d-3eeb-439d-b063-434410ef1337" containerName="extract-utilities"
Feb 19 13:35:08 crc kubenswrapper[4833]: I0219 13:35:08.734725 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c887b0d-3eeb-439d-b063-434410ef1337" containerName="registry-server"
Feb 19 13:35:08 crc kubenswrapper[4833]: I0219 13:35:08.736375 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n29nw"
Feb 19 13:35:08 crc kubenswrapper[4833]: I0219 13:35:08.754490 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n29nw"]
Feb 19 13:35:08 crc kubenswrapper[4833]: I0219 13:35:08.811224 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b-utilities\") pod \"redhat-operators-n29nw\" (UID: \"e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b\") " pod="openshift-marketplace/redhat-operators-n29nw"
Feb 19 13:35:08 crc kubenswrapper[4833]: I0219 13:35:08.811294 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b-catalog-content\") pod \"redhat-operators-n29nw\" (UID: \"e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b\") " pod="openshift-marketplace/redhat-operators-n29nw"
Feb 19 13:35:08 crc kubenswrapper[4833]: I0219 13:35:08.811377 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktx4c\" (UniqueName: \"kubernetes.io/projected/e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b-kube-api-access-ktx4c\") pod \"redhat-operators-n29nw\" (UID: \"e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b\") " pod="openshift-marketplace/redhat-operators-n29nw"
Feb 19 13:35:08 crc kubenswrapper[4833]: I0219 13:35:08.912790 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b-utilities\") pod \"redhat-operators-n29nw\" (UID: \"e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b\") " pod="openshift-marketplace/redhat-operators-n29nw"
Feb 19 13:35:08 crc kubenswrapper[4833]: I0219 13:35:08.912830 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b-catalog-content\") pod \"redhat-operators-n29nw\" (UID: \"e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b\") " pod="openshift-marketplace/redhat-operators-n29nw"
Feb 19 13:35:08 crc kubenswrapper[4833]: I0219 13:35:08.912873 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktx4c\" (UniqueName: \"kubernetes.io/projected/e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b-kube-api-access-ktx4c\") pod \"redhat-operators-n29nw\" (UID: \"e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b\") " pod="openshift-marketplace/redhat-operators-n29nw"
Feb 19 13:35:08 crc kubenswrapper[4833]: I0219 13:35:08.913357 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b-utilities\") pod \"redhat-operators-n29nw\" (UID: \"e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b\") " pod="openshift-marketplace/redhat-operators-n29nw"
Feb 19 13:35:08 crc kubenswrapper[4833]: I0219 13:35:08.913287 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b-catalog-content\") pod \"redhat-operators-n29nw\" (UID: \"e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b\") " pod="openshift-marketplace/redhat-operators-n29nw"
Feb 19 13:35:08 crc kubenswrapper[4833]: I0219 13:35:08.935292 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktx4c\" (UniqueName: \"kubernetes.io/projected/e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b-kube-api-access-ktx4c\") pod \"redhat-operators-n29nw\" (UID: \"e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b\") " pod="openshift-marketplace/redhat-operators-n29nw"
Feb 19 13:35:09 crc kubenswrapper[4833]: I0219 13:35:09.067686 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n29nw"
Feb 19 13:35:09 crc kubenswrapper[4833]: I0219 13:35:09.499471 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n29nw"]
Feb 19 13:35:09 crc kubenswrapper[4833]: W0219 13:35:09.501671 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6e18fd1_f2d2_459a_b31f_1fd5f70dec0b.slice/crio-dc859ea2079a6d6c74b6979157c6536376e1a4c7a82a9a939505b159b275e457 WatchSource:0}: Error finding container dc859ea2079a6d6c74b6979157c6536376e1a4c7a82a9a939505b159b275e457: Status 404 returned error can't find the container with id dc859ea2079a6d6c74b6979157c6536376e1a4c7a82a9a939505b159b275e457
Feb 19 13:35:10 crc kubenswrapper[4833]: I0219 13:35:10.048673 4833 generic.go:334] "Generic (PLEG): container finished" podID="e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b" containerID="cae2c90fa4ccd538b2038f40cf415a3ae271cffa7d56194a05737d6ac704b265" exitCode=0
Feb 19 13:35:10 crc kubenswrapper[4833]: I0219 13:35:10.048888 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n29nw" event={"ID":"e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b","Type":"ContainerDied","Data":"cae2c90fa4ccd538b2038f40cf415a3ae271cffa7d56194a05737d6ac704b265"}
Feb 19 13:35:10 crc kubenswrapper[4833]: I0219 13:35:10.048911 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n29nw" event={"ID":"e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b","Type":"ContainerStarted","Data":"dc859ea2079a6d6c74b6979157c6536376e1a4c7a82a9a939505b159b275e457"}
Feb 19 13:35:12 crc kubenswrapper[4833]: I0219 13:35:12.070148 4833 generic.go:334] "Generic (PLEG): container finished" podID="e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b" containerID="86bb7f9f049ee801d58238af5a1327ccfbc83e717205c247bc4ded0ebcbe7fb6" exitCode=0
Feb 19 13:35:12 crc kubenswrapper[4833]: I0219 13:35:12.070247 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n29nw" event={"ID":"e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b","Type":"ContainerDied","Data":"86bb7f9f049ee801d58238af5a1327ccfbc83e717205c247bc4ded0ebcbe7fb6"}
Feb 19 13:35:13 crc kubenswrapper[4833]: I0219 13:35:13.090192 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n29nw" event={"ID":"e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b","Type":"ContainerStarted","Data":"c41916f412642b1f0c0ab908e7887d25e4353b7d4d21edefabb9c063db39f1c8"}
Feb 19 13:35:13 crc kubenswrapper[4833]: I0219 13:35:13.136317 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n29nw" podStartSLOduration=2.676165229 podStartE2EDuration="5.136286767s" podCreationTimestamp="2026-02-19 13:35:08 +0000 UTC" firstStartedPulling="2026-02-19 13:35:10.050147939 +0000 UTC m=+2920.445666707" lastFinishedPulling="2026-02-19 13:35:12.510269467 +0000 UTC m=+2922.905788245" observedRunningTime="2026-02-19 13:35:13.119450449 +0000 UTC m=+2923.514969257" watchObservedRunningTime="2026-02-19 13:35:13.136286767 +0000 UTC m=+2923.531805555"
Feb 19 13:35:19 crc kubenswrapper[4833]: I0219 13:35:19.068322 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n29nw"
Feb 19 13:35:19 crc kubenswrapper[4833]: I0219 13:35:19.069023 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n29nw"
Feb 19 13:35:20 crc kubenswrapper[4833]: I0219 13:35:20.138346 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n29nw" podUID="e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b" containerName="registry-server" probeResult="failure" output=<
Feb 19 13:35:20 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s
Feb 19 13:35:20 crc kubenswrapper[4833]: >
Feb 19 13:35:29 crc kubenswrapper[4833]: I0219 13:35:29.135760 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n29nw"
Feb 19 13:35:29 crc kubenswrapper[4833]: I0219 13:35:29.204488 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n29nw"
Feb 19 13:35:29 crc kubenswrapper[4833]: I0219 13:35:29.383025 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n29nw"]
Feb 19 13:35:30 crc kubenswrapper[4833]: I0219 13:35:30.268605 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n29nw" podUID="e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b" containerName="registry-server" containerID="cri-o://c41916f412642b1f0c0ab908e7887d25e4353b7d4d21edefabb9c063db39f1c8" gracePeriod=2
Feb 19 13:35:30 crc kubenswrapper[4833]: I0219 13:35:30.813245 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n29nw"
Feb 19 13:35:30 crc kubenswrapper[4833]: I0219 13:35:30.929602 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b-utilities\") pod \"e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b\" (UID: \"e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b\") "
Feb 19 13:35:30 crc kubenswrapper[4833]: I0219 13:35:30.929755 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktx4c\" (UniqueName: \"kubernetes.io/projected/e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b-kube-api-access-ktx4c\") pod \"e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b\" (UID: \"e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b\") "
Feb 19 13:35:30 crc kubenswrapper[4833]: I0219 13:35:30.930023 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b-catalog-content\") pod \"e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b\" (UID: \"e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b\") "
Feb 19 13:35:30 crc kubenswrapper[4833]: I0219 13:35:30.931083 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b-utilities" (OuterVolumeSpecName: "utilities") pod "e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b" (UID: "e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 13:35:30 crc kubenswrapper[4833]: I0219 13:35:30.942050 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b-kube-api-access-ktx4c" (OuterVolumeSpecName: "kube-api-access-ktx4c") pod "e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b" (UID: "e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b"). InnerVolumeSpecName "kube-api-access-ktx4c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 13:35:31 crc kubenswrapper[4833]: I0219 13:35:31.033055 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 13:35:31 crc kubenswrapper[4833]: I0219 13:35:31.033133 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktx4c\" (UniqueName: \"kubernetes.io/projected/e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b-kube-api-access-ktx4c\") on node \"crc\" DevicePath \"\""
Feb 19 13:35:31 crc kubenswrapper[4833]: I0219 13:35:31.104081 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b" (UID: "e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 13:35:31 crc kubenswrapper[4833]: I0219 13:35:31.135142 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 13:35:31 crc kubenswrapper[4833]: I0219 13:35:31.286603 4833 generic.go:334] "Generic (PLEG): container finished" podID="e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b" containerID="c41916f412642b1f0c0ab908e7887d25e4353b7d4d21edefabb9c063db39f1c8" exitCode=0
Feb 19 13:35:31 crc kubenswrapper[4833]: I0219 13:35:31.286667 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n29nw" event={"ID":"e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b","Type":"ContainerDied","Data":"c41916f412642b1f0c0ab908e7887d25e4353b7d4d21edefabb9c063db39f1c8"}
Feb 19 13:35:31 crc kubenswrapper[4833]: I0219 13:35:31.286708 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n29nw" event={"ID":"e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b","Type":"ContainerDied","Data":"dc859ea2079a6d6c74b6979157c6536376e1a4c7a82a9a939505b159b275e457"}
Feb 19 13:35:31 crc kubenswrapper[4833]: I0219 13:35:31.286739 4833 scope.go:117] "RemoveContainer" containerID="c41916f412642b1f0c0ab908e7887d25e4353b7d4d21edefabb9c063db39f1c8"
Feb 19 13:35:31 crc kubenswrapper[4833]: I0219 13:35:31.286764 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n29nw"
Feb 19 13:35:31 crc kubenswrapper[4833]: I0219 13:35:31.336194 4833 scope.go:117] "RemoveContainer" containerID="86bb7f9f049ee801d58238af5a1327ccfbc83e717205c247bc4ded0ebcbe7fb6"
Feb 19 13:35:31 crc kubenswrapper[4833]: I0219 13:35:31.341163 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n29nw"]
Feb 19 13:35:31 crc kubenswrapper[4833]: I0219 13:35:31.364109 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n29nw"]
Feb 19 13:35:31 crc kubenswrapper[4833]: I0219 13:35:31.375148 4833 scope.go:117] "RemoveContainer" containerID="cae2c90fa4ccd538b2038f40cf415a3ae271cffa7d56194a05737d6ac704b265"
Feb 19 13:35:31 crc kubenswrapper[4833]: I0219 13:35:31.419023 4833 scope.go:117] "RemoveContainer" containerID="c41916f412642b1f0c0ab908e7887d25e4353b7d4d21edefabb9c063db39f1c8"
Feb 19 13:35:31 crc kubenswrapper[4833]: E0219 13:35:31.419384 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c41916f412642b1f0c0ab908e7887d25e4353b7d4d21edefabb9c063db39f1c8\": container with ID starting with c41916f412642b1f0c0ab908e7887d25e4353b7d4d21edefabb9c063db39f1c8 not found: ID does not exist" containerID="c41916f412642b1f0c0ab908e7887d25e4353b7d4d21edefabb9c063db39f1c8"
Feb 19 13:35:31 crc kubenswrapper[4833]: I0219 13:35:31.419423 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c41916f412642b1f0c0ab908e7887d25e4353b7d4d21edefabb9c063db39f1c8"} err="failed to get container status \"c41916f412642b1f0c0ab908e7887d25e4353b7d4d21edefabb9c063db39f1c8\": rpc error: code = NotFound desc = could not find container \"c41916f412642b1f0c0ab908e7887d25e4353b7d4d21edefabb9c063db39f1c8\": container with ID starting with c41916f412642b1f0c0ab908e7887d25e4353b7d4d21edefabb9c063db39f1c8 not found: ID does not exist"
Feb 19 13:35:31 crc kubenswrapper[4833]: I0219 13:35:31.419450 4833 scope.go:117] "RemoveContainer" containerID="86bb7f9f049ee801d58238af5a1327ccfbc83e717205c247bc4ded0ebcbe7fb6"
Feb 19 13:35:31 crc kubenswrapper[4833]: E0219 13:35:31.419750 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86bb7f9f049ee801d58238af5a1327ccfbc83e717205c247bc4ded0ebcbe7fb6\": container with ID starting with 86bb7f9f049ee801d58238af5a1327ccfbc83e717205c247bc4ded0ebcbe7fb6 not found: ID does not exist" containerID="86bb7f9f049ee801d58238af5a1327ccfbc83e717205c247bc4ded0ebcbe7fb6"
Feb 19 13:35:31 crc kubenswrapper[4833]: I0219 13:35:31.419780 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86bb7f9f049ee801d58238af5a1327ccfbc83e717205c247bc4ded0ebcbe7fb6"} err="failed to get container status \"86bb7f9f049ee801d58238af5a1327ccfbc83e717205c247bc4ded0ebcbe7fb6\": rpc error: code = NotFound desc = could not find container \"86bb7f9f049ee801d58238af5a1327ccfbc83e717205c247bc4ded0ebcbe7fb6\": container with ID starting with 86bb7f9f049ee801d58238af5a1327ccfbc83e717205c247bc4ded0ebcbe7fb6 not found: ID does not exist"
Feb 19 13:35:31 crc kubenswrapper[4833]: I0219 13:35:31.419800 4833 scope.go:117] "RemoveContainer" containerID="cae2c90fa4ccd538b2038f40cf415a3ae271cffa7d56194a05737d6ac704b265"
Feb 19 13:35:31 crc kubenswrapper[4833]: E0219 13:35:31.420094 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cae2c90fa4ccd538b2038f40cf415a3ae271cffa7d56194a05737d6ac704b265\": container with ID starting with cae2c90fa4ccd538b2038f40cf415a3ae271cffa7d56194a05737d6ac704b265 not found: ID does not exist" containerID="cae2c90fa4ccd538b2038f40cf415a3ae271cffa7d56194a05737d6ac704b265"
Feb 19 13:35:31 crc kubenswrapper[4833]: I0219 13:35:31.420116 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cae2c90fa4ccd538b2038f40cf415a3ae271cffa7d56194a05737d6ac704b265"} err="failed to get container status \"cae2c90fa4ccd538b2038f40cf415a3ae271cffa7d56194a05737d6ac704b265\": rpc error: code = NotFound desc = could not find container \"cae2c90fa4ccd538b2038f40cf415a3ae271cffa7d56194a05737d6ac704b265\": container with ID starting with cae2c90fa4ccd538b2038f40cf415a3ae271cffa7d56194a05737d6ac704b265 not found: ID does not exist"
Feb 19 13:35:32 crc kubenswrapper[4833]: I0219 13:35:32.333610 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b" path="/var/lib/kubelet/pods/e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b/volumes"
Feb 19 13:36:45 crc kubenswrapper[4833]: I0219 13:36:45.744572 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 13:36:45 crc kubenswrapper[4833]: I0219 13:36:45.745191 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 13:37:15 crc kubenswrapper[4833]: I0219 13:37:15.744194 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 13:37:15 crc kubenswrapper[4833]: I0219 13:37:15.744993 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 13:37:45 crc kubenswrapper[4833]: I0219 13:37:45.744162 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 13:37:45 crc kubenswrapper[4833]: I0219 13:37:45.744731 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 13:37:45 crc kubenswrapper[4833]: I0219 13:37:45.744777 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp"
Feb 19 13:37:45 crc kubenswrapper[4833]: I0219 13:37:45.745542 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2b2c9ce8fca5d7b68e625bbda1e92a6813d290bf01130f898c68091491f1d19a"} pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 13:37:45 crc kubenswrapper[4833]: I0219 13:37:45.745591 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" containerID="cri-o://2b2c9ce8fca5d7b68e625bbda1e92a6813d290bf01130f898c68091491f1d19a" gracePeriod=600
Feb 19 13:37:47 crc kubenswrapper[4833]: E0219 13:37:47.332609 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:37:47 crc kubenswrapper[4833]: I0219 13:37:47.851598 4833 generic.go:334] "Generic (PLEG): container finished" podID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerID="2b2c9ce8fca5d7b68e625bbda1e92a6813d290bf01130f898c68091491f1d19a" exitCode=0
Feb 19 13:37:47 crc kubenswrapper[4833]: I0219 13:37:47.851693 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" event={"ID":"a396d626-cea2-42cf-84c5-943b0b85a92b","Type":"ContainerDied","Data":"2b2c9ce8fca5d7b68e625bbda1e92a6813d290bf01130f898c68091491f1d19a"}
Feb 19 13:37:47 crc kubenswrapper[4833]: I0219 13:37:47.851919 4833 scope.go:117] "RemoveContainer" containerID="8ab6a38a220e1cc1992d0be3d5da374fe107c85f0ef6131e668a438d5db15f13"
Feb 19 13:37:47 crc kubenswrapper[4833]: I0219 13:37:47.852581 4833 scope.go:117] "RemoveContainer" containerID="2b2c9ce8fca5d7b68e625bbda1e92a6813d290bf01130f898c68091491f1d19a"
Feb 19 13:37:47 crc kubenswrapper[4833]: E0219 13:37:47.852855 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:38:02 crc kubenswrapper[4833]: I0219 13:38:02.317539 4833 scope.go:117] "RemoveContainer" containerID="2b2c9ce8fca5d7b68e625bbda1e92a6813d290bf01130f898c68091491f1d19a"
Feb 19 13:38:02 crc kubenswrapper[4833]: E0219 13:38:02.318776 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:38:16 crc kubenswrapper[4833]: I0219 13:38:16.315108 4833 scope.go:117] "RemoveContainer" containerID="2b2c9ce8fca5d7b68e625bbda1e92a6813d290bf01130f898c68091491f1d19a"
Feb 19 13:38:16 crc kubenswrapper[4833]: E0219 13:38:16.316341 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:38:29 crc kubenswrapper[4833]: I0219 13:38:29.315667 4833 scope.go:117] "RemoveContainer" containerID="2b2c9ce8fca5d7b68e625bbda1e92a6813d290bf01130f898c68091491f1d19a"
Feb 19 13:38:29 crc kubenswrapper[4833]: E0219 13:38:29.316393 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:38:40 crc kubenswrapper[4833]: I0219 13:38:40.331398 4833 scope.go:117] "RemoveContainer" containerID="2b2c9ce8fca5d7b68e625bbda1e92a6813d290bf01130f898c68091491f1d19a"
Feb 19 13:38:40 crc kubenswrapper[4833]: E0219 13:38:40.332428 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:38:54 crc kubenswrapper[4833]: I0219 13:38:54.315302 4833 scope.go:117] "RemoveContainer" containerID="2b2c9ce8fca5d7b68e625bbda1e92a6813d290bf01130f898c68091491f1d19a"
Feb 19 13:38:54 crc kubenswrapper[4833]: E0219 13:38:54.317194 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:39:07 crc kubenswrapper[4833]: I0219 13:39:07.315120 4833 scope.go:117] "RemoveContainer" containerID="2b2c9ce8fca5d7b68e625bbda1e92a6813d290bf01130f898c68091491f1d19a"
Feb 19 13:39:07 crc kubenswrapper[4833]: E0219 13:39:07.316238 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:39:21 crc kubenswrapper[4833]: I0219 13:39:21.315663 4833 scope.go:117] "RemoveContainer" containerID="2b2c9ce8fca5d7b68e625bbda1e92a6813d290bf01130f898c68091491f1d19a"
Feb 19 13:39:21 crc kubenswrapper[4833]: E0219 13:39:21.316438 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:39:37 crc kubenswrapper[4833]: I0219 13:39:37.315774 4833 scope.go:117] "RemoveContainer" containerID="2b2c9ce8fca5d7b68e625bbda1e92a6813d290bf01130f898c68091491f1d19a"
Feb 19 13:39:37 crc kubenswrapper[4833]: E0219 13:39:37.316619 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:39:48 crc kubenswrapper[4833]: I0219 13:39:48.316595 4833 scope.go:117] "RemoveContainer" containerID="2b2c9ce8fca5d7b68e625bbda1e92a6813d290bf01130f898c68091491f1d19a"
Feb 19 13:39:48 crc kubenswrapper[4833]: E0219 13:39:48.319234 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:40:02 crc kubenswrapper[4833]: I0219 13:40:02.315598 4833 scope.go:117] "RemoveContainer" containerID="2b2c9ce8fca5d7b68e625bbda1e92a6813d290bf01130f898c68091491f1d19a"
Feb 19 13:40:02 crc kubenswrapper[4833]: E0219 13:40:02.317396 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:40:17 crc kubenswrapper[4833]: I0219 13:40:17.314909 4833 scope.go:117] "RemoveContainer" containerID="2b2c9ce8fca5d7b68e625bbda1e92a6813d290bf01130f898c68091491f1d19a"
Feb 19 13:40:17 crc kubenswrapper[4833]: E0219 13:40:17.315971 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:40:31 crc kubenswrapper[4833]: I0219 13:40:31.315564 4833 scope.go:117] "RemoveContainer" containerID="2b2c9ce8fca5d7b68e625bbda1e92a6813d290bf01130f898c68091491f1d19a"
Feb 19 13:40:31 crc kubenswrapper[4833]: E0219 13:40:31.316515 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:40:45 crc kubenswrapper[4833]: I0219 13:40:45.315779 4833 scope.go:117] "RemoveContainer" containerID="2b2c9ce8fca5d7b68e625bbda1e92a6813d290bf01130f898c68091491f1d19a"
Feb 19 13:40:45 crc kubenswrapper[4833]: E0219 13:40:45.316573 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:40:58 crc kubenswrapper[4833]: I0219 13:40:58.314940 4833 scope.go:117] "RemoveContainer" containerID="2b2c9ce8fca5d7b68e625bbda1e92a6813d290bf01130f898c68091491f1d19a"
Feb 19 13:40:58 crc kubenswrapper[4833]: E0219 13:40:58.315722 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:41:13 crc kubenswrapper[4833]: I0219 13:41:13.315387 4833 scope.go:117] "RemoveContainer" containerID="2b2c9ce8fca5d7b68e625bbda1e92a6813d290bf01130f898c68091491f1d19a"
Feb 19 13:41:13 crc kubenswrapper[4833]: E0219 13:41:13.316669 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:41:27 crc kubenswrapper[4833]: I0219 13:41:27.314789 4833 scope.go:117] "RemoveContainer" containerID="2b2c9ce8fca5d7b68e625bbda1e92a6813d290bf01130f898c68091491f1d19a"
Feb 19 13:41:27 crc kubenswrapper[4833]: E0219 13:41:27.315853 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:41:39 crc kubenswrapper[4833]: I0219 13:41:39.315600 4833 scope.go:117] "RemoveContainer" containerID="2b2c9ce8fca5d7b68e625bbda1e92a6813d290bf01130f898c68091491f1d19a"
Feb 19 13:41:39 crc kubenswrapper[4833]: E0219 13:41:39.317753 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\""
pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:41:52 crc kubenswrapper[4833]: I0219 13:41:52.319013 4833 scope.go:117] "RemoveContainer" containerID="2b2c9ce8fca5d7b68e625bbda1e92a6813d290bf01130f898c68091491f1d19a" Feb 19 13:41:52 crc kubenswrapper[4833]: E0219 13:41:52.319762 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:41:59 crc kubenswrapper[4833]: I0219 13:41:59.051643 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-b48fm"] Feb 19 13:41:59 crc kubenswrapper[4833]: E0219 13:41:59.052824 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b" containerName="extract-utilities" Feb 19 13:41:59 crc kubenswrapper[4833]: I0219 13:41:59.052843 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b" containerName="extract-utilities" Feb 19 13:41:59 crc kubenswrapper[4833]: E0219 13:41:59.052868 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b" containerName="registry-server" Feb 19 13:41:59 crc kubenswrapper[4833]: I0219 13:41:59.052874 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b" containerName="registry-server" Feb 19 13:41:59 crc kubenswrapper[4833]: E0219 13:41:59.052886 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b" containerName="extract-content" Feb 19 13:41:59 crc kubenswrapper[4833]: I0219 13:41:59.052893 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b" containerName="extract-content" Feb 19 13:41:59 crc kubenswrapper[4833]: I0219 13:41:59.053074 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6e18fd1-f2d2-459a-b31f-1fd5f70dec0b" containerName="registry-server" Feb 19 13:41:59 crc kubenswrapper[4833]: I0219 13:41:59.054521 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-b48fm" Feb 19 13:41:59 crc kubenswrapper[4833]: I0219 13:41:59.060282 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b48fm"] Feb 19 13:41:59 crc kubenswrapper[4833]: I0219 13:41:59.087374 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/febd9efc-e479-46db-bd85-ff99153679e1-utilities\") pod \"community-operators-b48fm\" (UID: \"febd9efc-e479-46db-bd85-ff99153679e1\") " pod="openshift-marketplace/community-operators-b48fm" Feb 19 13:41:59 crc kubenswrapper[4833]: I0219 13:41:59.087643 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/febd9efc-e479-46db-bd85-ff99153679e1-catalog-content\") pod \"community-operators-b48fm\" (UID: \"febd9efc-e479-46db-bd85-ff99153679e1\") " pod="openshift-marketplace/community-operators-b48fm" Feb 19 13:41:59 crc kubenswrapper[4833]: I0219 13:41:59.087771 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmsh2\" (UniqueName: \"kubernetes.io/projected/febd9efc-e479-46db-bd85-ff99153679e1-kube-api-access-nmsh2\") pod \"community-operators-b48fm\" (UID: \"febd9efc-e479-46db-bd85-ff99153679e1\") " pod="openshift-marketplace/community-operators-b48fm" Feb 19 13:41:59 crc kubenswrapper[4833]: I0219 13:41:59.190136 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/febd9efc-e479-46db-bd85-ff99153679e1-catalog-content\") pod \"community-operators-b48fm\" (UID: \"febd9efc-e479-46db-bd85-ff99153679e1\") " pod="openshift-marketplace/community-operators-b48fm" Feb 19 13:41:59 crc kubenswrapper[4833]: I0219 13:41:59.190248 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmsh2\" (UniqueName: \"kubernetes.io/projected/febd9efc-e479-46db-bd85-ff99153679e1-kube-api-access-nmsh2\") pod \"community-operators-b48fm\" (UID: \"febd9efc-e479-46db-bd85-ff99153679e1\") " pod="openshift-marketplace/community-operators-b48fm" Feb 19 13:41:59 crc kubenswrapper[4833]: I0219 13:41:59.190418 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/febd9efc-e479-46db-bd85-ff99153679e1-utilities\") pod \"community-operators-b48fm\" (UID: \"febd9efc-e479-46db-bd85-ff99153679e1\") " pod="openshift-marketplace/community-operators-b48fm" Feb 19 13:41:59 crc kubenswrapper[4833]: I0219 13:41:59.190742 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/febd9efc-e479-46db-bd85-ff99153679e1-catalog-content\") pod \"community-operators-b48fm\" (UID: \"febd9efc-e479-46db-bd85-ff99153679e1\") " pod="openshift-marketplace/community-operators-b48fm" Feb 19 13:41:59 crc kubenswrapper[4833]: I0219 13:41:59.191018 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/febd9efc-e479-46db-bd85-ff99153679e1-utilities\") pod \"community-operators-b48fm\" (UID: \"febd9efc-e479-46db-bd85-ff99153679e1\") " pod="openshift-marketplace/community-operators-b48fm" Feb 19 13:41:59 crc kubenswrapper[4833]: I0219 13:41:59.209176 4833 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nmsh2\" (UniqueName: \"kubernetes.io/projected/febd9efc-e479-46db-bd85-ff99153679e1-kube-api-access-nmsh2\") pod \"community-operators-b48fm\" (UID: \"febd9efc-e479-46db-bd85-ff99153679e1\") " pod="openshift-marketplace/community-operators-b48fm" Feb 19 13:41:59 crc kubenswrapper[4833]: I0219 13:41:59.377401 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b48fm" Feb 19 13:41:59 crc kubenswrapper[4833]: I0219 13:41:59.961078 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-b48fm"] Feb 19 13:42:00 crc kubenswrapper[4833]: I0219 13:42:00.492125 4833 generic.go:334] "Generic (PLEG): container finished" podID="febd9efc-e479-46db-bd85-ff99153679e1" containerID="18772c7d6a0b19dfd388b9bcfb9e0d520cb0d83cc35f5d23579df6438deab955" exitCode=0 Feb 19 13:42:00 crc kubenswrapper[4833]: I0219 13:42:00.492175 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b48fm" event={"ID":"febd9efc-e479-46db-bd85-ff99153679e1","Type":"ContainerDied","Data":"18772c7d6a0b19dfd388b9bcfb9e0d520cb0d83cc35f5d23579df6438deab955"} Feb 19 13:42:00 crc kubenswrapper[4833]: I0219 13:42:00.492525 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b48fm" event={"ID":"febd9efc-e479-46db-bd85-ff99153679e1","Type":"ContainerStarted","Data":"f02518e13c3f2628629948c180aa3617bee9c71b117dc8bb117e55333f29278b"} Feb 19 13:42:00 crc kubenswrapper[4833]: I0219 13:42:00.496492 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 13:42:00 crc kubenswrapper[4833]: I0219 13:42:00.844265 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pxldp"] Feb 19 13:42:00 crc kubenswrapper[4833]: I0219 13:42:00.846952 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxldp" Feb 19 13:42:00 crc kubenswrapper[4833]: I0219 13:42:00.866882 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxldp"] Feb 19 13:42:00 crc kubenswrapper[4833]: I0219 13:42:00.938977 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bc9b89a-0319-473c-b933-329efc12f9a3-catalog-content\") pod \"redhat-marketplace-pxldp\" (UID: \"9bc9b89a-0319-473c-b933-329efc12f9a3\") " pod="openshift-marketplace/redhat-marketplace-pxldp" Feb 19 13:42:00 crc kubenswrapper[4833]: I0219 13:42:00.939093 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bc9b89a-0319-473c-b933-329efc12f9a3-utilities\") pod \"redhat-marketplace-pxldp\" (UID: \"9bc9b89a-0319-473c-b933-329efc12f9a3\") " pod="openshift-marketplace/redhat-marketplace-pxldp" Feb 19 13:42:00 crc kubenswrapper[4833]: I0219 13:42:00.939130 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bghpp\" (UniqueName: \"kubernetes.io/projected/9bc9b89a-0319-473c-b933-329efc12f9a3-kube-api-access-bghpp\") pod \"redhat-marketplace-pxldp\" (UID: \"9bc9b89a-0319-473c-b933-329efc12f9a3\") " pod="openshift-marketplace/redhat-marketplace-pxldp" Feb 19 13:42:01 crc kubenswrapper[4833]: I0219 13:42:01.040650 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bc9b89a-0319-473c-b933-329efc12f9a3-catalog-content\") pod \"redhat-marketplace-pxldp\" (UID: \"9bc9b89a-0319-473c-b933-329efc12f9a3\") " pod="openshift-marketplace/redhat-marketplace-pxldp" Feb 19 13:42:01 crc kubenswrapper[4833]: I0219 13:42:01.040793 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bc9b89a-0319-473c-b933-329efc12f9a3-utilities\") pod \"redhat-marketplace-pxldp\" (UID: \"9bc9b89a-0319-473c-b933-329efc12f9a3\") " pod="openshift-marketplace/redhat-marketplace-pxldp" Feb 19 13:42:01 crc kubenswrapper[4833]: I0219 13:42:01.040845 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bghpp\" (UniqueName: \"kubernetes.io/projected/9bc9b89a-0319-473c-b933-329efc12f9a3-kube-api-access-bghpp\") pod \"redhat-marketplace-pxldp\" (UID: \"9bc9b89a-0319-473c-b933-329efc12f9a3\") " pod="openshift-marketplace/redhat-marketplace-pxldp" Feb 19 13:42:01 crc kubenswrapper[4833]: I0219 13:42:01.041302 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bc9b89a-0319-473c-b933-329efc12f9a3-catalog-content\") pod \"redhat-marketplace-pxldp\" (UID: \"9bc9b89a-0319-473c-b933-329efc12f9a3\") " pod="openshift-marketplace/redhat-marketplace-pxldp" Feb 19 13:42:01 crc kubenswrapper[4833]: I0219 13:42:01.041310 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bc9b89a-0319-473c-b933-329efc12f9a3-utilities\") pod \"redhat-marketplace-pxldp\" (UID: \"9bc9b89a-0319-473c-b933-329efc12f9a3\") " pod="openshift-marketplace/redhat-marketplace-pxldp" Feb 19 13:42:01 crc kubenswrapper[4833]: I0219 13:42:01.069174 4833 operation_generator.go:637] "MountVolume.SetUp 
Feb 19 13:42:01 crc kubenswrapper[4833]: I0219 13:42:01.186233 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxldp"
Feb 19 13:42:01 crc kubenswrapper[4833]: I0219 13:42:01.502840 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b48fm" event={"ID":"febd9efc-e479-46db-bd85-ff99153679e1","Type":"ContainerStarted","Data":"456e1a867c0d662f87300d098c36a141a614972165087befec46a88489f58626"}
Feb 19 13:42:01 crc kubenswrapper[4833]: I0219 13:42:01.651026 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxldp"]
Feb 19 13:42:01 crc kubenswrapper[4833]: W0219 13:42:01.651183 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bc9b89a_0319_473c_b933_329efc12f9a3.slice/crio-a1c2e8f8780a65079fa5ddc84ab2d33ccd78e3471e741afe50d436f00b711780 WatchSource:0}: Error finding container a1c2e8f8780a65079fa5ddc84ab2d33ccd78e3471e741afe50d436f00b711780: Status 404 returned error can't find the container with id a1c2e8f8780a65079fa5ddc84ab2d33ccd78e3471e741afe50d436f00b711780
Feb 19 13:42:02 crc kubenswrapper[4833]: I0219 13:42:02.517539 4833 generic.go:334] "Generic (PLEG): container finished" podID="9bc9b89a-0319-473c-b933-329efc12f9a3" containerID="032cd91553a0cd761ff38695adba71c2d649b9c184f5b34a5add7fc2b1f3b964" exitCode=0
Feb 19 13:42:02 crc kubenswrapper[4833]: I0219 13:42:02.517622 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxldp" event={"ID":"9bc9b89a-0319-473c-b933-329efc12f9a3","Type":"ContainerDied","Data":"032cd91553a0cd761ff38695adba71c2d649b9c184f5b34a5add7fc2b1f3b964"}
Feb 19 13:42:02 crc kubenswrapper[4833]: I0219 13:42:02.518138 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxldp" event={"ID":"9bc9b89a-0319-473c-b933-329efc12f9a3","Type":"ContainerStarted","Data":"a1c2e8f8780a65079fa5ddc84ab2d33ccd78e3471e741afe50d436f00b711780"}
Feb 19 13:42:02 crc kubenswrapper[4833]: I0219 13:42:02.521948 4833 generic.go:334] "Generic (PLEG): container finished" podID="febd9efc-e479-46db-bd85-ff99153679e1" containerID="456e1a867c0d662f87300d098c36a141a614972165087befec46a88489f58626" exitCode=0
Feb 19 13:42:02 crc kubenswrapper[4833]: I0219 13:42:02.521983 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b48fm" event={"ID":"febd9efc-e479-46db-bd85-ff99153679e1","Type":"ContainerDied","Data":"456e1a867c0d662f87300d098c36a141a614972165087befec46a88489f58626"}
Feb 19 13:42:03 crc kubenswrapper[4833]: I0219 13:42:03.314916 4833 scope.go:117] "RemoveContainer" containerID="2b2c9ce8fca5d7b68e625bbda1e92a6813d290bf01130f898c68091491f1d19a"
Feb 19 13:42:03 crc kubenswrapper[4833]: E0219 13:42:03.315508 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:42:03 crc kubenswrapper[4833]: I0219 13:42:03.531645 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b48fm" event={"ID":"febd9efc-e479-46db-bd85-ff99153679e1","Type":"ContainerStarted","Data":"5ac458882d646da15b83a8ad7d40e6125060369f87a9221075c28eaad248975a"}
Feb 19 13:42:03 crc kubenswrapper[4833]: I0219 13:42:03.556356 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-b48fm" podStartSLOduration=2.135044501 podStartE2EDuration="4.556338542s" podCreationTimestamp="2026-02-19 13:41:59 +0000 UTC" firstStartedPulling="2026-02-19 13:42:00.496076533 +0000 UTC m=+3330.891595331" lastFinishedPulling="2026-02-19 13:42:02.917370554 +0000 UTC m=+3333.312889372" observedRunningTime="2026-02-19 13:42:03.549674875 +0000 UTC m=+3333.945193663" watchObservedRunningTime="2026-02-19 13:42:03.556338542 +0000 UTC m=+3333.951857310"
Feb 19 13:42:04 crc kubenswrapper[4833]: I0219 13:42:04.542699 4833 generic.go:334] "Generic (PLEG): container finished" podID="9bc9b89a-0319-473c-b933-329efc12f9a3" containerID="fce2d95336adc2ef5631615eb96e031bff3d961770a3340e975dcb54ba32dcaf" exitCode=0
Feb 19 13:42:04 crc kubenswrapper[4833]: I0219 13:42:04.542912 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxldp" event={"ID":"9bc9b89a-0319-473c-b933-329efc12f9a3","Type":"ContainerDied","Data":"fce2d95336adc2ef5631615eb96e031bff3d961770a3340e975dcb54ba32dcaf"}
Feb 19 13:42:05 crc kubenswrapper[4833]: I0219 13:42:05.553537 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxldp" event={"ID":"9bc9b89a-0319-473c-b933-329efc12f9a3","Type":"ContainerStarted","Data":"007a48a0870e9cac69670032e6bd7ba08c67ec2d1da8d385af4672dc1076e744"}
Feb 19 13:42:05 crc kubenswrapper[4833]: I0219 13:42:05.575409 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pxldp" podStartSLOduration=3.15979746 podStartE2EDuration="5.57539278s" podCreationTimestamp="2026-02-19 13:42:00 +0000 UTC" firstStartedPulling="2026-02-19 13:42:02.520057141 +0000 UTC m=+3332.915575949" lastFinishedPulling="2026-02-19 13:42:04.935652461 +0000 UTC m=+3335.331171269" observedRunningTime="2026-02-19 13:42:05.572065471 +0000 UTC m=+3335.967584239" watchObservedRunningTime="2026-02-19 13:42:05.57539278 +0000 UTC m=+3335.970911548"
Feb 19 13:42:09 crc kubenswrapper[4833]: I0219 13:42:09.378286 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-b48fm"
Feb 19 13:42:09 crc kubenswrapper[4833]: I0219 13:42:09.379026 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-b48fm"
Feb 19 13:42:09 crc kubenswrapper[4833]: I0219 13:42:09.438784 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-b48fm"
Feb 19 13:42:09 crc kubenswrapper[4833]: I0219 13:42:09.658657 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-b48fm"
Feb 19 13:42:10 crc kubenswrapper[4833]: I0219 13:42:10.447954 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b48fm"]
Feb 19 13:42:11 crc kubenswrapper[4833]: I0219 13:42:11.186981 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pxldp"
Feb 19 13:42:11 crc kubenswrapper[4833]: I0219 13:42:11.187114 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pxldp"
Feb 19 13:42:11 crc kubenswrapper[4833]: I0219 13:42:11.262775 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pxldp"
Feb 19 13:42:11 crc kubenswrapper[4833]: I0219 13:42:11.616847 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-b48fm" podUID="febd9efc-e479-46db-bd85-ff99153679e1" containerName="registry-server" containerID="cri-o://5ac458882d646da15b83a8ad7d40e6125060369f87a9221075c28eaad248975a" gracePeriod=2
Feb 19 13:42:11 crc kubenswrapper[4833]: I0219 13:42:11.694921 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pxldp"
Feb 19 13:42:12 crc kubenswrapper[4833]: I0219 13:42:12.232917 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b48fm"
Feb 19 13:42:12 crc kubenswrapper[4833]: I0219 13:42:12.276354 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/febd9efc-e479-46db-bd85-ff99153679e1-catalog-content\") pod \"febd9efc-e479-46db-bd85-ff99153679e1\" (UID: \"febd9efc-e479-46db-bd85-ff99153679e1\") "
Feb 19 13:42:12 crc kubenswrapper[4833]: I0219 13:42:12.276449 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/febd9efc-e479-46db-bd85-ff99153679e1-utilities\") pod \"febd9efc-e479-46db-bd85-ff99153679e1\" (UID: \"febd9efc-e479-46db-bd85-ff99153679e1\") "
Feb 19 13:42:12 crc kubenswrapper[4833]: I0219 13:42:12.276622 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmsh2\" (UniqueName: \"kubernetes.io/projected/febd9efc-e479-46db-bd85-ff99153679e1-kube-api-access-nmsh2\") pod \"febd9efc-e479-46db-bd85-ff99153679e1\" (UID: \"febd9efc-e479-46db-bd85-ff99153679e1\") "
Feb 19 13:42:12 crc kubenswrapper[4833]: I0219 13:42:12.280911 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/febd9efc-e479-46db-bd85-ff99153679e1-utilities" (OuterVolumeSpecName: "utilities") pod "febd9efc-e479-46db-bd85-ff99153679e1" (UID: "febd9efc-e479-46db-bd85-ff99153679e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 13:42:12 crc kubenswrapper[4833]: I0219 13:42:12.285458 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/febd9efc-e479-46db-bd85-ff99153679e1-kube-api-access-nmsh2" (OuterVolumeSpecName: "kube-api-access-nmsh2") pod "febd9efc-e479-46db-bd85-ff99153679e1" (UID: "febd9efc-e479-46db-bd85-ff99153679e1"). InnerVolumeSpecName "kube-api-access-nmsh2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 13:42:12 crc kubenswrapper[4833]: I0219 13:42:12.339538 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/febd9efc-e479-46db-bd85-ff99153679e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "febd9efc-e479-46db-bd85-ff99153679e1" (UID: "febd9efc-e479-46db-bd85-ff99153679e1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 13:42:12 crc kubenswrapper[4833]: I0219 13:42:12.378516 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/febd9efc-e479-46db-bd85-ff99153679e1-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 13:42:12 crc kubenswrapper[4833]: I0219 13:42:12.378542 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/febd9efc-e479-46db-bd85-ff99153679e1-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 13:42:12 crc kubenswrapper[4833]: I0219 13:42:12.378551 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmsh2\" (UniqueName: \"kubernetes.io/projected/febd9efc-e479-46db-bd85-ff99153679e1-kube-api-access-nmsh2\") on node \"crc\" DevicePath \"\""
Feb 19 13:42:12 crc kubenswrapper[4833]: I0219 13:42:12.629413 4833 generic.go:334] "Generic (PLEG): container finished" podID="febd9efc-e479-46db-bd85-ff99153679e1" containerID="5ac458882d646da15b83a8ad7d40e6125060369f87a9221075c28eaad248975a" exitCode=0
Feb 19 13:42:12 crc kubenswrapper[4833]: I0219 13:42:12.629485 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-b48fm"
Feb 19 13:42:12 crc kubenswrapper[4833]: I0219 13:42:12.629525 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b48fm" event={"ID":"febd9efc-e479-46db-bd85-ff99153679e1","Type":"ContainerDied","Data":"5ac458882d646da15b83a8ad7d40e6125060369f87a9221075c28eaad248975a"}
Feb 19 13:42:12 crc kubenswrapper[4833]: I0219 13:42:12.629925 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-b48fm" event={"ID":"febd9efc-e479-46db-bd85-ff99153679e1","Type":"ContainerDied","Data":"f02518e13c3f2628629948c180aa3617bee9c71b117dc8bb117e55333f29278b"}
Feb 19 13:42:12 crc kubenswrapper[4833]: I0219 13:42:12.629950 4833 scope.go:117] "RemoveContainer" containerID="5ac458882d646da15b83a8ad7d40e6125060369f87a9221075c28eaad248975a"
Feb 19 13:42:12 crc kubenswrapper[4833]: I0219 13:42:12.652368 4833 scope.go:117] "RemoveContainer" containerID="456e1a867c0d662f87300d098c36a141a614972165087befec46a88489f58626"
Feb 19 13:42:12 crc kubenswrapper[4833]: I0219 13:42:12.669742 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-b48fm"]
Feb 19 13:42:12 crc kubenswrapper[4833]: I0219 13:42:12.677634 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-b48fm"]
Feb 19 13:42:12 crc kubenswrapper[4833]: I0219 13:42:12.706822 4833 scope.go:117] "RemoveContainer" containerID="18772c7d6a0b19dfd388b9bcfb9e0d520cb0d83cc35f5d23579df6438deab955"
Feb 19 13:42:12 crc kubenswrapper[4833]: I0219 13:42:12.751104 4833 scope.go:117] "RemoveContainer" containerID="5ac458882d646da15b83a8ad7d40e6125060369f87a9221075c28eaad248975a"
Feb 19 13:42:12 crc kubenswrapper[4833]: E0219 13:42:12.751754 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ac458882d646da15b83a8ad7d40e6125060369f87a9221075c28eaad248975a\": container with ID starting with 5ac458882d646da15b83a8ad7d40e6125060369f87a9221075c28eaad248975a not found: ID does not exist" containerID="5ac458882d646da15b83a8ad7d40e6125060369f87a9221075c28eaad248975a"
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ac458882d646da15b83a8ad7d40e6125060369f87a9221075c28eaad248975a\": container with ID starting with 5ac458882d646da15b83a8ad7d40e6125060369f87a9221075c28eaad248975a not found: ID does not exist" containerID="5ac458882d646da15b83a8ad7d40e6125060369f87a9221075c28eaad248975a" Feb 19 13:42:12 crc kubenswrapper[4833]: I0219 13:42:12.751790 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ac458882d646da15b83a8ad7d40e6125060369f87a9221075c28eaad248975a"} err="failed to get container status \"5ac458882d646da15b83a8ad7d40e6125060369f87a9221075c28eaad248975a\": rpc error: code = NotFound desc = could not find container \"5ac458882d646da15b83a8ad7d40e6125060369f87a9221075c28eaad248975a\": container with ID starting with 5ac458882d646da15b83a8ad7d40e6125060369f87a9221075c28eaad248975a not found: ID does not exist" Feb 19 13:42:12 crc kubenswrapper[4833]: I0219 13:42:12.751813 4833 scope.go:117] "RemoveContainer" containerID="456e1a867c0d662f87300d098c36a141a614972165087befec46a88489f58626" Feb 19 13:42:12 crc kubenswrapper[4833]: E0219 13:42:12.752198 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"456e1a867c0d662f87300d098c36a141a614972165087befec46a88489f58626\": container with ID starting with 456e1a867c0d662f87300d098c36a141a614972165087befec46a88489f58626 not found: ID does not exist" containerID="456e1a867c0d662f87300d098c36a141a614972165087befec46a88489f58626" Feb 19 13:42:12 crc kubenswrapper[4833]: I0219 13:42:12.752226 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"456e1a867c0d662f87300d098c36a141a614972165087befec46a88489f58626"} err="failed to get container status \"456e1a867c0d662f87300d098c36a141a614972165087befec46a88489f58626\": rpc error: code = NotFound desc = could not find container \"456e1a867c0d662f87300d098c36a141a614972165087befec46a88489f58626\": container with ID starting with 456e1a867c0d662f87300d098c36a141a614972165087befec46a88489f58626 not found: ID does not exist" Feb 19 13:42:12 crc kubenswrapper[4833]: I0219 13:42:12.752244 4833 scope.go:117] "RemoveContainer" containerID="18772c7d6a0b19dfd388b9bcfb9e0d520cb0d83cc35f5d23579df6438deab955" Feb 19 13:42:12 crc kubenswrapper[4833]: E0219 13:42:12.752552 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18772c7d6a0b19dfd388b9bcfb9e0d520cb0d83cc35f5d23579df6438deab955\": container with ID starting with 18772c7d6a0b19dfd388b9bcfb9e0d520cb0d83cc35f5d23579df6438deab955 not found: ID does not exist" containerID="18772c7d6a0b19dfd388b9bcfb9e0d520cb0d83cc35f5d23579df6438deab955" Feb 19 13:42:12 crc kubenswrapper[4833]: I0219 13:42:12.752583 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18772c7d6a0b19dfd388b9bcfb9e0d520cb0d83cc35f5d23579df6438deab955"} err="failed to get container status \"18772c7d6a0b19dfd388b9bcfb9e0d520cb0d83cc35f5d23579df6438deab955\": rpc error: code = NotFound desc = could not find container \"18772c7d6a0b19dfd388b9bcfb9e0d520cb0d83cc35f5d23579df6438deab955\": container with ID starting with 18772c7d6a0b19dfd388b9bcfb9e0d520cb0d83cc35f5d23579df6438deab955 not found: ID does not exist" Feb 19 13:42:13 crc kubenswrapper[4833]: I0219 13:42:13.232191 4833 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxldp"] Feb 19 13:42:13 crc kubenswrapper[4833]: I0219 13:42:13.643576 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pxldp" podUID="9bc9b89a-0319-473c-b933-329efc12f9a3" containerName="registry-server" containerID="cri-o://007a48a0870e9cac69670032e6bd7ba08c67ec2d1da8d385af4672dc1076e744" gracePeriod=2 Feb 19 13:42:14 crc kubenswrapper[4833]: I0219 13:42:14.152679 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxldp" Feb 19 13:42:14 crc kubenswrapper[4833]: I0219 13:42:14.211251 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bc9b89a-0319-473c-b933-329efc12f9a3-utilities\") pod \"9bc9b89a-0319-473c-b933-329efc12f9a3\" (UID: \"9bc9b89a-0319-473c-b933-329efc12f9a3\") " Feb 19 13:42:14 crc kubenswrapper[4833]: I0219 13:42:14.211628 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bc9b89a-0319-473c-b933-329efc12f9a3-catalog-content\") pod \"9bc9b89a-0319-473c-b933-329efc12f9a3\" (UID: \"9bc9b89a-0319-473c-b933-329efc12f9a3\") " Feb 19 13:42:14 crc kubenswrapper[4833]: I0219 13:42:14.211735 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bghpp\" (UniqueName: \"kubernetes.io/projected/9bc9b89a-0319-473c-b933-329efc12f9a3-kube-api-access-bghpp\") pod \"9bc9b89a-0319-473c-b933-329efc12f9a3\" (UID: \"9bc9b89a-0319-473c-b933-329efc12f9a3\") " Feb 19 13:42:14 crc kubenswrapper[4833]: I0219 13:42:14.212521 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bc9b89a-0319-473c-b933-329efc12f9a3-utilities" (OuterVolumeSpecName: "utilities") pod "9bc9b89a-0319-473c-b933-329efc12f9a3" (UID: "9bc9b89a-0319-473c-b933-329efc12f9a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:42:14 crc kubenswrapper[4833]: I0219 13:42:14.217669 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bc9b89a-0319-473c-b933-329efc12f9a3-kube-api-access-bghpp" (OuterVolumeSpecName: "kube-api-access-bghpp") pod "9bc9b89a-0319-473c-b933-329efc12f9a3" (UID: "9bc9b89a-0319-473c-b933-329efc12f9a3"). InnerVolumeSpecName "kube-api-access-bghpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:42:14 crc kubenswrapper[4833]: I0219 13:42:14.245195 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bc9b89a-0319-473c-b933-329efc12f9a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9bc9b89a-0319-473c-b933-329efc12f9a3" (UID: "9bc9b89a-0319-473c-b933-329efc12f9a3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:42:14 crc kubenswrapper[4833]: I0219 13:42:14.314570 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bc9b89a-0319-473c-b933-329efc12f9a3-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:42:14 crc kubenswrapper[4833]: I0219 13:42:14.314619 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bc9b89a-0319-473c-b933-329efc12f9a3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:42:14 crc kubenswrapper[4833]: I0219 13:42:14.314641 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bghpp\" (UniqueName: \"kubernetes.io/projected/9bc9b89a-0319-473c-b933-329efc12f9a3-kube-api-access-bghpp\") on node \"crc\" DevicePath \"\"" Feb 19 13:42:14 crc kubenswrapper[4833]: I0219 13:42:14.331729 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="febd9efc-e479-46db-bd85-ff99153679e1" path="/var/lib/kubelet/pods/febd9efc-e479-46db-bd85-ff99153679e1/volumes" Feb 19 13:42:14 crc kubenswrapper[4833]: I0219 13:42:14.669079 4833 generic.go:334] "Generic (PLEG): container finished" podID="9bc9b89a-0319-473c-b933-329efc12f9a3" containerID="007a48a0870e9cac69670032e6bd7ba08c67ec2d1da8d385af4672dc1076e744" exitCode=0 Feb 19 13:42:14 crc kubenswrapper[4833]: I0219 13:42:14.669123 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxldp" event={"ID":"9bc9b89a-0319-473c-b933-329efc12f9a3","Type":"ContainerDied","Data":"007a48a0870e9cac69670032e6bd7ba08c67ec2d1da8d385af4672dc1076e744"} Feb 19 13:42:14 crc kubenswrapper[4833]: I0219 13:42:14.669153 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxldp" event={"ID":"9bc9b89a-0319-473c-b933-329efc12f9a3","Type":"ContainerDied","Data":"a1c2e8f8780a65079fa5ddc84ab2d33ccd78e3471e741afe50d436f00b711780"} Feb 19 13:42:14 crc kubenswrapper[4833]: I0219 13:42:14.669171 4833 scope.go:117] "RemoveContainer" containerID="007a48a0870e9cac69670032e6bd7ba08c67ec2d1da8d385af4672dc1076e744" Feb 19 13:42:14 crc kubenswrapper[4833]: I0219 13:42:14.669201 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxldp" Feb 19 13:42:14 crc kubenswrapper[4833]: I0219 13:42:14.704123 4833 scope.go:117] "RemoveContainer" containerID="fce2d95336adc2ef5631615eb96e031bff3d961770a3340e975dcb54ba32dcaf" Feb 19 13:42:14 crc kubenswrapper[4833]: I0219 13:42:14.705063 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxldp"] Feb 19 13:42:14 crc kubenswrapper[4833]: I0219 13:42:14.719024 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxldp"] Feb 19 13:42:14 crc kubenswrapper[4833]: I0219 13:42:14.734166 4833 scope.go:117] "RemoveContainer" containerID="032cd91553a0cd761ff38695adba71c2d649b9c184f5b34a5add7fc2b1f3b964" Feb 19 13:42:14 crc kubenswrapper[4833]: I0219 13:42:14.782011 4833 scope.go:117] "RemoveContainer" containerID="007a48a0870e9cac69670032e6bd7ba08c67ec2d1da8d385af4672dc1076e744" Feb 19 13:42:14 crc kubenswrapper[4833]: E0219 13:42:14.782540 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"007a48a0870e9cac69670032e6bd7ba08c67ec2d1da8d385af4672dc1076e744\": container with ID starting with 007a48a0870e9cac69670032e6bd7ba08c67ec2d1da8d385af4672dc1076e744 not found: ID does not exist" containerID="007a48a0870e9cac69670032e6bd7ba08c67ec2d1da8d385af4672dc1076e744" Feb 19 13:42:14 crc kubenswrapper[4833]: I0219 13:42:14.782565 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"007a48a0870e9cac69670032e6bd7ba08c67ec2d1da8d385af4672dc1076e744"} err="failed to get container status \"007a48a0870e9cac69670032e6bd7ba08c67ec2d1da8d385af4672dc1076e744\": rpc error: code = NotFound desc = could not find container \"007a48a0870e9cac69670032e6bd7ba08c67ec2d1da8d385af4672dc1076e744\": container with ID starting with 007a48a0870e9cac69670032e6bd7ba08c67ec2d1da8d385af4672dc1076e744 not found: ID does not exist" Feb 19 13:42:14 crc kubenswrapper[4833]: I0219 13:42:14.782584 4833 scope.go:117] "RemoveContainer" containerID="fce2d95336adc2ef5631615eb96e031bff3d961770a3340e975dcb54ba32dcaf" Feb 19 13:42:14 crc kubenswrapper[4833]: E0219 13:42:14.782999 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fce2d95336adc2ef5631615eb96e031bff3d961770a3340e975dcb54ba32dcaf\": container with ID starting with fce2d95336adc2ef5631615eb96e031bff3d961770a3340e975dcb54ba32dcaf not found: ID does not exist" containerID="fce2d95336adc2ef5631615eb96e031bff3d961770a3340e975dcb54ba32dcaf" Feb 19 13:42:14 crc kubenswrapper[4833]: I0219 13:42:14.783019 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fce2d95336adc2ef5631615eb96e031bff3d961770a3340e975dcb54ba32dcaf"} err="failed to get container status \"fce2d95336adc2ef5631615eb96e031bff3d961770a3340e975dcb54ba32dcaf\": rpc error: code = NotFound desc = could not find container \"fce2d95336adc2ef5631615eb96e031bff3d961770a3340e975dcb54ba32dcaf\": container with ID starting with fce2d95336adc2ef5631615eb96e031bff3d961770a3340e975dcb54ba32dcaf not found: ID does not exist" Feb 19 13:42:14 crc kubenswrapper[4833]: I0219 13:42:14.783035 4833 scope.go:117] "RemoveContainer" containerID="032cd91553a0cd761ff38695adba71c2d649b9c184f5b34a5add7fc2b1f3b964" Feb 19 13:42:14 crc kubenswrapper[4833]: E0219 13:42:14.783411 4833 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"032cd91553a0cd761ff38695adba71c2d649b9c184f5b34a5add7fc2b1f3b964\": container with ID starting with 032cd91553a0cd761ff38695adba71c2d649b9c184f5b34a5add7fc2b1f3b964 not found: ID does not exist" containerID="032cd91553a0cd761ff38695adba71c2d649b9c184f5b34a5add7fc2b1f3b964" Feb 19 13:42:14 crc kubenswrapper[4833]: I0219 13:42:14.783474 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"032cd91553a0cd761ff38695adba71c2d649b9c184f5b34a5add7fc2b1f3b964"} err="failed to get container status \"032cd91553a0cd761ff38695adba71c2d649b9c184f5b34a5add7fc2b1f3b964\": rpc error: code = NotFound desc = could not find container \"032cd91553a0cd761ff38695adba71c2d649b9c184f5b34a5add7fc2b1f3b964\": container with ID starting with 032cd91553a0cd761ff38695adba71c2d649b9c184f5b34a5add7fc2b1f3b964 not found: ID does not exist" Feb 19 13:42:16 crc kubenswrapper[4833]: I0219 13:42:16.328195 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bc9b89a-0319-473c-b933-329efc12f9a3" path="/var/lib/kubelet/pods/9bc9b89a-0319-473c-b933-329efc12f9a3/volumes" Feb 19 13:42:17 crc kubenswrapper[4833]: I0219 13:42:17.314795 4833 scope.go:117] "RemoveContainer" containerID="2b2c9ce8fca5d7b68e625bbda1e92a6813d290bf01130f898c68091491f1d19a" Feb 19 13:42:17 crc kubenswrapper[4833]: E0219 13:42:17.315266 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:42:31 crc kubenswrapper[4833]: I0219 13:42:31.315242 4833 scope.go:117] "RemoveContainer" containerID="2b2c9ce8fca5d7b68e625bbda1e92a6813d290bf01130f898c68091491f1d19a" Feb 19 13:42:31 crc kubenswrapper[4833]: E0219 13:42:31.316190 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:42:33 crc kubenswrapper[4833]: I0219 13:42:33.870945 4833 generic.go:334] "Generic (PLEG): container finished" podID="fbca1583-1d12-4e49-bda3-864536093e85" containerID="f6e272ab2c214823f3e1379dc3adac62d28bfa2e27495fe7e695ad28bc055358" exitCode=0 Feb 19 13:42:33 crc kubenswrapper[4833]: I0219 13:42:33.871076 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"fbca1583-1d12-4e49-bda3-864536093e85","Type":"ContainerDied","Data":"f6e272ab2c214823f3e1379dc3adac62d28bfa2e27495fe7e695ad28bc055358"} Feb 19 13:42:35 crc kubenswrapper[4833]: I0219 13:42:35.432773 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 19 13:42:35 crc kubenswrapper[4833]: I0219 13:42:35.470182 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fbca1583-1d12-4e49-bda3-864536093e85-config-data\") pod \"fbca1583-1d12-4e49-bda3-864536093e85\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " Feb 19 13:42:35 crc kubenswrapper[4833]: I0219 13:42:35.471024 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbca1583-1d12-4e49-bda3-864536093e85-config-data" (OuterVolumeSpecName: "config-data") pod "fbca1583-1d12-4e49-bda3-864536093e85" (UID: "fbca1583-1d12-4e49-bda3-864536093e85"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:42:35 crc kubenswrapper[4833]: I0219 13:42:35.471151 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fbca1583-1d12-4e49-bda3-864536093e85-openstack-config-secret\") pod \"fbca1583-1d12-4e49-bda3-864536093e85\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " Feb 19 13:42:35 crc kubenswrapper[4833]: I0219 13:42:35.471184 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fbca1583-1d12-4e49-bda3-864536093e85-openstack-config\") pod \"fbca1583-1d12-4e49-bda3-864536093e85\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " Feb 19 13:42:35 crc kubenswrapper[4833]: I0219 13:42:35.471789 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpkqb\" (UniqueName: \"kubernetes.io/projected/fbca1583-1d12-4e49-bda3-864536093e85-kube-api-access-bpkqb\") pod \"fbca1583-1d12-4e49-bda3-864536093e85\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " Feb 19 13:42:35 crc kubenswrapper[4833]: I0219 13:42:35.471820 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"fbca1583-1d12-4e49-bda3-864536093e85\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " Feb 19 13:42:35 crc kubenswrapper[4833]: I0219 13:42:35.471911 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fbca1583-1d12-4e49-bda3-864536093e85-test-operator-ephemeral-workdir\") pod \"fbca1583-1d12-4e49-bda3-864536093e85\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " Feb 19 13:42:35 crc kubenswrapper[4833]: I0219 13:42:35.471950 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fbca1583-1d12-4e49-bda3-864536093e85-ssh-key\") pod \"fbca1583-1d12-4e49-bda3-864536093e85\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " Feb 19 13:42:35 crc kubenswrapper[4833]: I0219 13:42:35.471975 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fbca1583-1d12-4e49-bda3-864536093e85-test-operator-ephemeral-temporary\") pod \"fbca1583-1d12-4e49-bda3-864536093e85\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " Feb 19 13:42:35 crc kubenswrapper[4833]: I0219 13:42:35.472014 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/secret/fbca1583-1d12-4e49-bda3-864536093e85-ca-certs\") pod \"fbca1583-1d12-4e49-bda3-864536093e85\" (UID: \"fbca1583-1d12-4e49-bda3-864536093e85\") " Feb 19 13:42:35 crc kubenswrapper[4833]: I0219 13:42:35.472418 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fbca1583-1d12-4e49-bda3-864536093e85-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 13:42:35 crc kubenswrapper[4833]: I0219 13:42:35.476271 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbca1583-1d12-4e49-bda3-864536093e85-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "fbca1583-1d12-4e49-bda3-864536093e85" (UID: "fbca1583-1d12-4e49-bda3-864536093e85"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:42:35 crc kubenswrapper[4833]: I0219 13:42:35.479457 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbca1583-1d12-4e49-bda3-864536093e85-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "fbca1583-1d12-4e49-bda3-864536093e85" (UID: "fbca1583-1d12-4e49-bda3-864536093e85"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:42:35 crc kubenswrapper[4833]: I0219 13:42:35.490643 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbca1583-1d12-4e49-bda3-864536093e85-kube-api-access-bpkqb" (OuterVolumeSpecName: "kube-api-access-bpkqb") pod "fbca1583-1d12-4e49-bda3-864536093e85" (UID: "fbca1583-1d12-4e49-bda3-864536093e85"). InnerVolumeSpecName "kube-api-access-bpkqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:42:35 crc kubenswrapper[4833]: I0219 13:42:35.490714 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "test-operator-logs") pod "fbca1583-1d12-4e49-bda3-864536093e85" (UID: "fbca1583-1d12-4e49-bda3-864536093e85"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 19 13:42:35 crc kubenswrapper[4833]: I0219 13:42:35.502852 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbca1583-1d12-4e49-bda3-864536093e85-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fbca1583-1d12-4e49-bda3-864536093e85" (UID: "fbca1583-1d12-4e49-bda3-864536093e85"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:42:35 crc kubenswrapper[4833]: I0219 13:42:35.505198 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbca1583-1d12-4e49-bda3-864536093e85-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "fbca1583-1d12-4e49-bda3-864536093e85" (UID: "fbca1583-1d12-4e49-bda3-864536093e85"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:42:35 crc kubenswrapper[4833]: I0219 13:42:35.515588 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbca1583-1d12-4e49-bda3-864536093e85-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "fbca1583-1d12-4e49-bda3-864536093e85" (UID: "fbca1583-1d12-4e49-bda3-864536093e85"). 
InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 13:42:35 crc kubenswrapper[4833]: I0219 13:42:35.546723 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbca1583-1d12-4e49-bda3-864536093e85-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "fbca1583-1d12-4e49-bda3-864536093e85" (UID: "fbca1583-1d12-4e49-bda3-864536093e85"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 19 13:42:35 crc kubenswrapper[4833]: I0219 13:42:35.573751 4833 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/fbca1583-1d12-4e49-bda3-864536093e85-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 19 13:42:35 crc kubenswrapper[4833]: I0219 13:42:35.573782 4833 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fbca1583-1d12-4e49-bda3-864536093e85-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 19 13:42:35 crc kubenswrapper[4833]: I0219 13:42:35.573795 4833 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/fbca1583-1d12-4e49-bda3-864536093e85-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 19 13:42:35 crc kubenswrapper[4833]: I0219 13:42:35.573806 4833 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/fbca1583-1d12-4e49-bda3-864536093e85-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 19 13:42:35 crc kubenswrapper[4833]: I0219 13:42:35.573817 4833 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/fbca1583-1d12-4e49-bda3-864536093e85-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 19 13:42:35 crc kubenswrapper[4833]: I0219 13:42:35.573828 4833 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/fbca1583-1d12-4e49-bda3-864536093e85-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 19 13:42:35 crc kubenswrapper[4833]: I0219 13:42:35.573841 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpkqb\" (UniqueName: \"kubernetes.io/projected/fbca1583-1d12-4e49-bda3-864536093e85-kube-api-access-bpkqb\") on node \"crc\" DevicePath \"\"" Feb 19 13:42:35 crc kubenswrapper[4833]: I0219 13:42:35.573880 4833 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Feb 19 13:42:35 crc kubenswrapper[4833]: I0219 13:42:35.592953 4833 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Feb 19 13:42:35 crc kubenswrapper[4833]: I0219 13:42:35.675204 4833 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Feb 19 13:42:35 crc kubenswrapper[4833]: I0219 13:42:35.897923 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"fbca1583-1d12-4e49-bda3-864536093e85","Type":"ContainerDied","Data":"04730a653ad9866d446b2234606538024e1090eb0cb22c93902f0b0383dac4de"} Feb 19 13:42:35 crc kubenswrapper[4833]: 
Feb 19 13:42:35 crc kubenswrapper[4833]: I0219 13:42:35.897983 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04730a653ad9866d446b2234606538024e1090eb0cb22c93902f0b0383dac4de"
Feb 19 13:42:35 crc kubenswrapper[4833]: I0219 13:42:35.898006 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Feb 19 13:42:42 crc kubenswrapper[4833]: I0219 13:42:42.315414 4833 scope.go:117] "RemoveContainer" containerID="2b2c9ce8fca5d7b68e625bbda1e92a6813d290bf01130f898c68091491f1d19a"
Feb 19 13:42:42 crc kubenswrapper[4833]: E0219 13:42:42.316443 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:42:43 crc kubenswrapper[4833]: I0219 13:42:43.466670 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Feb 19 13:42:43 crc kubenswrapper[4833]: E0219 13:42:43.467486 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="febd9efc-e479-46db-bd85-ff99153679e1" containerName="extract-content"
Feb 19 13:42:43 crc kubenswrapper[4833]: I0219 13:42:43.467523 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="febd9efc-e479-46db-bd85-ff99153679e1" containerName="extract-content"
Feb 19 13:42:43 crc kubenswrapper[4833]: E0219 13:42:43.467534 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbca1583-1d12-4e49-bda3-864536093e85" containerName="tempest-tests-tempest-tests-runner"
Feb 19 13:42:43 crc kubenswrapper[4833]: I0219 13:42:43.467545 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbca1583-1d12-4e49-bda3-864536093e85" containerName="tempest-tests-tempest-tests-runner"
Feb 19 13:42:43 crc kubenswrapper[4833]: E0219 13:42:43.467554 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bc9b89a-0319-473c-b933-329efc12f9a3" containerName="extract-utilities"
Feb 19 13:42:43 crc kubenswrapper[4833]: I0219 13:42:43.467562 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bc9b89a-0319-473c-b933-329efc12f9a3" containerName="extract-utilities"
Feb 19 13:42:43 crc kubenswrapper[4833]: E0219 13:42:43.467574 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="febd9efc-e479-46db-bd85-ff99153679e1" containerName="registry-server"
Feb 19 13:42:43 crc kubenswrapper[4833]: I0219 13:42:43.467597 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="febd9efc-e479-46db-bd85-ff99153679e1" containerName="registry-server"
Feb 19 13:42:43 crc kubenswrapper[4833]: E0219 13:42:43.467621 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bc9b89a-0319-473c-b933-329efc12f9a3" containerName="registry-server"
Feb 19 13:42:43 crc kubenswrapper[4833]: I0219 13:42:43.467629 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bc9b89a-0319-473c-b933-329efc12f9a3" containerName="registry-server"
Feb 19 13:42:43 crc kubenswrapper[4833]: E0219 13:42:43.467643 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="febd9efc-e479-46db-bd85-ff99153679e1" containerName="extract-utilities"
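The E0219 pod_workers entry above shows machine-config-daemon sitting at CrashLoopBackOff's ceiling: on each failed restart the kubelet doubles the back-off delay, from an initial 10s up to the 5m0s cap quoted in the message, resetting only after the container has run cleanly for a while. A toy reproduction of that schedule (illustrative only; the 10s start and 2x growth are stock kubelet defaults, the 300s cap matches the log line):

    # Reproduce the CrashLoopBackOff delay ladder implied by
    # "back-off 5m0s restarting failed container" in the entry above.
    def crashloop_delays(restarts: int, initial: float = 10.0, cap: float = 300.0):
        delay, out = initial, []
        for _ in range(restarts):
            out.append(min(delay, cap))
            delay *= 2
        return out

    print(crashloop_delays(7))  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0]

In this excerpt the retry at 13:42:53 then succeeds (ContainerStarted at 13:42:54), i.e. the back-off window had expired by then.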
assignment" podUID="febd9efc-e479-46db-bd85-ff99153679e1" containerName="extract-utilities" Feb 19 13:42:43 crc kubenswrapper[4833]: E0219 13:42:43.467662 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bc9b89a-0319-473c-b933-329efc12f9a3" containerName="extract-content" Feb 19 13:42:43 crc kubenswrapper[4833]: I0219 13:42:43.467669 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bc9b89a-0319-473c-b933-329efc12f9a3" containerName="extract-content" Feb 19 13:42:43 crc kubenswrapper[4833]: I0219 13:42:43.467885 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bc9b89a-0319-473c-b933-329efc12f9a3" containerName="registry-server" Feb 19 13:42:43 crc kubenswrapper[4833]: I0219 13:42:43.467901 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbca1583-1d12-4e49-bda3-864536093e85" containerName="tempest-tests-tempest-tests-runner" Feb 19 13:42:43 crc kubenswrapper[4833]: I0219 13:42:43.467932 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="febd9efc-e479-46db-bd85-ff99153679e1" containerName="registry-server" Feb 19 13:42:43 crc kubenswrapper[4833]: I0219 13:42:43.468710 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 13:42:43 crc kubenswrapper[4833]: I0219 13:42:43.473588 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-4g9nf" Feb 19 13:42:43 crc kubenswrapper[4833]: I0219 13:42:43.477268 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 19 13:42:43 crc kubenswrapper[4833]: I0219 13:42:43.640193 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"398ff1ce-0aa5-4f20-9ff3-e21807d5771c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 13:42:43 crc kubenswrapper[4833]: I0219 13:42:43.640353 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqvq8\" (UniqueName: \"kubernetes.io/projected/398ff1ce-0aa5-4f20-9ff3-e21807d5771c-kube-api-access-lqvq8\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"398ff1ce-0aa5-4f20-9ff3-e21807d5771c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 13:42:43 crc kubenswrapper[4833]: I0219 13:42:43.742248 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqvq8\" (UniqueName: \"kubernetes.io/projected/398ff1ce-0aa5-4f20-9ff3-e21807d5771c-kube-api-access-lqvq8\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"398ff1ce-0aa5-4f20-9ff3-e21807d5771c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 13:42:43 crc kubenswrapper[4833]: I0219 13:42:43.742342 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"398ff1ce-0aa5-4f20-9ff3-e21807d5771c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 13:42:43 crc kubenswrapper[4833]: I0219 13:42:43.742768 4833 operation_generator.go:580] 
"MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"398ff1ce-0aa5-4f20-9ff3-e21807d5771c\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 13:42:43 crc kubenswrapper[4833]: I0219 13:42:43.774290 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqvq8\" (UniqueName: \"kubernetes.io/projected/398ff1ce-0aa5-4f20-9ff3-e21807d5771c-kube-api-access-lqvq8\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"398ff1ce-0aa5-4f20-9ff3-e21807d5771c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 13:42:43 crc kubenswrapper[4833]: I0219 13:42:43.776319 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"398ff1ce-0aa5-4f20-9ff3-e21807d5771c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 13:42:43 crc kubenswrapper[4833]: I0219 13:42:43.803378 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 19 13:42:44 crc kubenswrapper[4833]: I0219 13:42:44.252548 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 19 13:42:44 crc kubenswrapper[4833]: W0219 13:42:44.252641 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod398ff1ce_0aa5_4f20_9ff3_e21807d5771c.slice/crio-a8e7837e7ebd8032699823550fff4b9787ae385211f6a2031d1a5e4ad8f2ddf4 WatchSource:0}: Error finding container a8e7837e7ebd8032699823550fff4b9787ae385211f6a2031d1a5e4ad8f2ddf4: Status 404 returned error can't find the container with id a8e7837e7ebd8032699823550fff4b9787ae385211f6a2031d1a5e4ad8f2ddf4 Feb 19 13:42:45 crc kubenswrapper[4833]: I0219 13:42:45.017254 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"398ff1ce-0aa5-4f20-9ff3-e21807d5771c","Type":"ContainerStarted","Data":"a8e7837e7ebd8032699823550fff4b9787ae385211f6a2031d1a5e4ad8f2ddf4"} Feb 19 13:42:46 crc kubenswrapper[4833]: I0219 13:42:46.031393 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"398ff1ce-0aa5-4f20-9ff3-e21807d5771c","Type":"ContainerStarted","Data":"2748a00429e2c880dc15e2610ee0bf6bf02cd311f1c36d96d1dbcd3efc5864f0"} Feb 19 13:42:46 crc kubenswrapper[4833]: I0219 13:42:46.058140 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.26768269 podStartE2EDuration="3.058116306s" podCreationTimestamp="2026-02-19 13:42:43 +0000 UTC" firstStartedPulling="2026-02-19 13:42:44.255175575 +0000 UTC m=+3374.650694353" lastFinishedPulling="2026-02-19 13:42:45.045609201 +0000 UTC m=+3375.441127969" observedRunningTime="2026-02-19 13:42:46.051090809 +0000 UTC m=+3376.446609597" watchObservedRunningTime="2026-02-19 13:42:46.058116306 +0000 UTC m=+3376.453635084" Feb 19 13:42:53 crc kubenswrapper[4833]: I0219 13:42:53.315219 4833 scope.go:117] 
"RemoveContainer" containerID="2b2c9ce8fca5d7b68e625bbda1e92a6813d290bf01130f898c68091491f1d19a" Feb 19 13:42:54 crc kubenswrapper[4833]: I0219 13:42:54.112540 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" event={"ID":"a396d626-cea2-42cf-84c5-943b0b85a92b","Type":"ContainerStarted","Data":"dadbbd04deb7bfa91640633352760633123f4c60efb49cdd6b578eff95c9ffcc"} Feb 19 13:43:05 crc kubenswrapper[4833]: I0219 13:43:05.934558 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wbwwr/must-gather-j95pl"] Feb 19 13:43:05 crc kubenswrapper[4833]: I0219 13:43:05.937730 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wbwwr/must-gather-j95pl" Feb 19 13:43:05 crc kubenswrapper[4833]: I0219 13:43:05.940289 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-wbwwr"/"openshift-service-ca.crt" Feb 19 13:43:05 crc kubenswrapper[4833]: I0219 13:43:05.940285 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-wbwwr"/"kube-root-ca.crt" Feb 19 13:43:05 crc kubenswrapper[4833]: I0219 13:43:05.985285 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g77bx\" (UniqueName: \"kubernetes.io/projected/fb08c1f7-3c4c-4589-94be-936811a6f919-kube-api-access-g77bx\") pod \"must-gather-j95pl\" (UID: \"fb08c1f7-3c4c-4589-94be-936811a6f919\") " pod="openshift-must-gather-wbwwr/must-gather-j95pl" Feb 19 13:43:05 crc kubenswrapper[4833]: I0219 13:43:05.985376 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fb08c1f7-3c4c-4589-94be-936811a6f919-must-gather-output\") pod \"must-gather-j95pl\" (UID: \"fb08c1f7-3c4c-4589-94be-936811a6f919\") " pod="openshift-must-gather-wbwwr/must-gather-j95pl" Feb 19 13:43:06 crc kubenswrapper[4833]: I0219 13:43:06.013055 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wbwwr/must-gather-j95pl"] Feb 19 13:43:06 crc kubenswrapper[4833]: I0219 13:43:06.087483 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g77bx\" (UniqueName: \"kubernetes.io/projected/fb08c1f7-3c4c-4589-94be-936811a6f919-kube-api-access-g77bx\") pod \"must-gather-j95pl\" (UID: \"fb08c1f7-3c4c-4589-94be-936811a6f919\") " pod="openshift-must-gather-wbwwr/must-gather-j95pl" Feb 19 13:43:06 crc kubenswrapper[4833]: I0219 13:43:06.087613 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fb08c1f7-3c4c-4589-94be-936811a6f919-must-gather-output\") pod \"must-gather-j95pl\" (UID: \"fb08c1f7-3c4c-4589-94be-936811a6f919\") " pod="openshift-must-gather-wbwwr/must-gather-j95pl" Feb 19 13:43:06 crc kubenswrapper[4833]: I0219 13:43:06.088077 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fb08c1f7-3c4c-4589-94be-936811a6f919-must-gather-output\") pod \"must-gather-j95pl\" (UID: \"fb08c1f7-3c4c-4589-94be-936811a6f919\") " pod="openshift-must-gather-wbwwr/must-gather-j95pl" Feb 19 13:43:06 crc kubenswrapper[4833]: I0219 13:43:06.106575 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g77bx\" (UniqueName: 
\"kubernetes.io/projected/fb08c1f7-3c4c-4589-94be-936811a6f919-kube-api-access-g77bx\") pod \"must-gather-j95pl\" (UID: \"fb08c1f7-3c4c-4589-94be-936811a6f919\") " pod="openshift-must-gather-wbwwr/must-gather-j95pl" Feb 19 13:43:06 crc kubenswrapper[4833]: I0219 13:43:06.261793 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wbwwr/must-gather-j95pl" Feb 19 13:43:06 crc kubenswrapper[4833]: W0219 13:43:06.764334 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb08c1f7_3c4c_4589_94be_936811a6f919.slice/crio-1f5ae1b88fb59ff8ede005b31dd0cfe73c79c18614761db7d32958ddcb4300b3 WatchSource:0}: Error finding container 1f5ae1b88fb59ff8ede005b31dd0cfe73c79c18614761db7d32958ddcb4300b3: Status 404 returned error can't find the container with id 1f5ae1b88fb59ff8ede005b31dd0cfe73c79c18614761db7d32958ddcb4300b3 Feb 19 13:43:06 crc kubenswrapper[4833]: I0219 13:43:06.764576 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wbwwr/must-gather-j95pl"] Feb 19 13:43:07 crc kubenswrapper[4833]: I0219 13:43:07.254698 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wbwwr/must-gather-j95pl" event={"ID":"fb08c1f7-3c4c-4589-94be-936811a6f919","Type":"ContainerStarted","Data":"1f5ae1b88fb59ff8ede005b31dd0cfe73c79c18614761db7d32958ddcb4300b3"} Feb 19 13:43:13 crc kubenswrapper[4833]: I0219 13:43:13.313178 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wbwwr/must-gather-j95pl" event={"ID":"fb08c1f7-3c4c-4589-94be-936811a6f919","Type":"ContainerStarted","Data":"2be4068635c96afe24c327f504de9e3307bdcadef7e1b68a5f2d6a28e792fd51"} Feb 19 13:43:14 crc kubenswrapper[4833]: I0219 13:43:14.329225 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wbwwr/must-gather-j95pl" event={"ID":"fb08c1f7-3c4c-4589-94be-936811a6f919","Type":"ContainerStarted","Data":"5e5c37fc3be61022381bb0fc29d1ebbe557b8ffe76b82f1d7327042285d07f5f"} Feb 19 13:43:14 crc kubenswrapper[4833]: I0219 13:43:14.355314 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wbwwr/must-gather-j95pl" podStartSLOduration=3.397082343 podStartE2EDuration="9.355294161s" podCreationTimestamp="2026-02-19 13:43:05 +0000 UTC" firstStartedPulling="2026-02-19 13:43:06.766669532 +0000 UTC m=+3397.162188300" lastFinishedPulling="2026-02-19 13:43:12.72488135 +0000 UTC m=+3403.120400118" observedRunningTime="2026-02-19 13:43:14.344064042 +0000 UTC m=+3404.739582820" watchObservedRunningTime="2026-02-19 13:43:14.355294161 +0000 UTC m=+3404.750812949" Feb 19 13:43:16 crc kubenswrapper[4833]: I0219 13:43:16.726379 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wbwwr/crc-debug-v5vdk"] Feb 19 13:43:16 crc kubenswrapper[4833]: I0219 13:43:16.728062 4833 util.go:30] "No sandbox for pod can be found. 
Feb 19 13:43:16 crc kubenswrapper[4833]: I0219 13:43:16.728062 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wbwwr/crc-debug-v5vdk"
Feb 19 13:43:16 crc kubenswrapper[4833]: I0219 13:43:16.730275 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-wbwwr"/"default-dockercfg-6qc25"
Feb 19 13:43:16 crc kubenswrapper[4833]: I0219 13:43:16.848779 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhdtd\" (UniqueName: \"kubernetes.io/projected/38bf4d77-20b8-4319-a6dc-7334c13dc91f-kube-api-access-mhdtd\") pod \"crc-debug-v5vdk\" (UID: \"38bf4d77-20b8-4319-a6dc-7334c13dc91f\") " pod="openshift-must-gather-wbwwr/crc-debug-v5vdk"
Feb 19 13:43:16 crc kubenswrapper[4833]: I0219 13:43:16.848952 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/38bf4d77-20b8-4319-a6dc-7334c13dc91f-host\") pod \"crc-debug-v5vdk\" (UID: \"38bf4d77-20b8-4319-a6dc-7334c13dc91f\") " pod="openshift-must-gather-wbwwr/crc-debug-v5vdk"
Feb 19 13:43:16 crc kubenswrapper[4833]: I0219 13:43:16.951197 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhdtd\" (UniqueName: \"kubernetes.io/projected/38bf4d77-20b8-4319-a6dc-7334c13dc91f-kube-api-access-mhdtd\") pod \"crc-debug-v5vdk\" (UID: \"38bf4d77-20b8-4319-a6dc-7334c13dc91f\") " pod="openshift-must-gather-wbwwr/crc-debug-v5vdk"
Feb 19 13:43:16 crc kubenswrapper[4833]: I0219 13:43:16.951297 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/38bf4d77-20b8-4319-a6dc-7334c13dc91f-host\") pod \"crc-debug-v5vdk\" (UID: \"38bf4d77-20b8-4319-a6dc-7334c13dc91f\") " pod="openshift-must-gather-wbwwr/crc-debug-v5vdk"
Feb 19 13:43:16 crc kubenswrapper[4833]: I0219 13:43:16.951452 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/38bf4d77-20b8-4319-a6dc-7334c13dc91f-host\") pod \"crc-debug-v5vdk\" (UID: \"38bf4d77-20b8-4319-a6dc-7334c13dc91f\") " pod="openshift-must-gather-wbwwr/crc-debug-v5vdk"
Feb 19 13:43:16 crc kubenswrapper[4833]: I0219 13:43:16.970089 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhdtd\" (UniqueName: \"kubernetes.io/projected/38bf4d77-20b8-4319-a6dc-7334c13dc91f-kube-api-access-mhdtd\") pod \"crc-debug-v5vdk\" (UID: \"38bf4d77-20b8-4319-a6dc-7334c13dc91f\") " pod="openshift-must-gather-wbwwr/crc-debug-v5vdk"
Feb 19 13:43:17 crc kubenswrapper[4833]: I0219 13:43:17.046875 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wbwwr/crc-debug-v5vdk"
Feb 19 13:43:17 crc kubenswrapper[4833]: I0219 13:43:17.358409 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wbwwr/crc-debug-v5vdk" event={"ID":"38bf4d77-20b8-4319-a6dc-7334c13dc91f","Type":"ContainerStarted","Data":"33b0919d36f915d609db0a970d8e2aaa04ba75a716d6453faed1195396d551b9"}
Feb 19 13:43:28 crc kubenswrapper[4833]: I0219 13:43:28.458982 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wbwwr/crc-debug-v5vdk" event={"ID":"38bf4d77-20b8-4319-a6dc-7334c13dc91f","Type":"ContainerStarted","Data":"7f66495b4cf455fc1702c0afd83fd6bc3df83b3ba55574bbb26e26308aa8590c"}
Feb 19 13:43:28 crc kubenswrapper[4833]: I0219 13:43:28.472172 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wbwwr/crc-debug-v5vdk" podStartSLOduration=1.502223092 podStartE2EDuration="12.472158719s" podCreationTimestamp="2026-02-19 13:43:16 +0000 UTC" firstStartedPulling="2026-02-19 13:43:17.087787882 +0000 UTC m=+3407.483306650" lastFinishedPulling="2026-02-19 13:43:28.057723509 +0000 UTC m=+3418.453242277" observedRunningTime="2026-02-19 13:43:28.470085424 +0000 UTC m=+3418.865604192" watchObservedRunningTime="2026-02-19 13:43:28.472158719 +0000 UTC m=+3418.867677487"
Feb 19 13:44:05 crc kubenswrapper[4833]: I0219 13:44:05.780453 4833 generic.go:334] "Generic (PLEG): container finished" podID="38bf4d77-20b8-4319-a6dc-7334c13dc91f" containerID="7f66495b4cf455fc1702c0afd83fd6bc3df83b3ba55574bbb26e26308aa8590c" exitCode=0
Feb 19 13:44:05 crc kubenswrapper[4833]: I0219 13:44:05.780620 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wbwwr/crc-debug-v5vdk" event={"ID":"38bf4d77-20b8-4319-a6dc-7334c13dc91f","Type":"ContainerDied","Data":"7f66495b4cf455fc1702c0afd83fd6bc3df83b3ba55574bbb26e26308aa8590c"}
Feb 19 13:44:06 crc kubenswrapper[4833]: I0219 13:44:06.915559 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wbwwr/crc-debug-v5vdk"
Feb 19 13:44:06 crc kubenswrapper[4833]: I0219 13:44:06.957908 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wbwwr/crc-debug-v5vdk"]
Feb 19 13:44:06 crc kubenswrapper[4833]: I0219 13:44:06.969065 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wbwwr/crc-debug-v5vdk"]
Feb 19 13:44:07 crc kubenswrapper[4833]: I0219 13:44:07.092939 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhdtd\" (UniqueName: \"kubernetes.io/projected/38bf4d77-20b8-4319-a6dc-7334c13dc91f-kube-api-access-mhdtd\") pod \"38bf4d77-20b8-4319-a6dc-7334c13dc91f\" (UID: \"38bf4d77-20b8-4319-a6dc-7334c13dc91f\") "
Feb 19 13:44:07 crc kubenswrapper[4833]: I0219 13:44:07.093276 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/38bf4d77-20b8-4319-a6dc-7334c13dc91f-host\") pod \"38bf4d77-20b8-4319-a6dc-7334c13dc91f\" (UID: \"38bf4d77-20b8-4319-a6dc-7334c13dc91f\") "
Feb 19 13:44:07 crc kubenswrapper[4833]: I0219 13:44:07.093884 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38bf4d77-20b8-4319-a6dc-7334c13dc91f-host" (OuterVolumeSpecName: "host") pod "38bf4d77-20b8-4319-a6dc-7334c13dc91f" (UID: "38bf4d77-20b8-4319-a6dc-7334c13dc91f"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 13:44:07 crc kubenswrapper[4833]: I0219 13:44:07.100465 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38bf4d77-20b8-4319-a6dc-7334c13dc91f-kube-api-access-mhdtd" (OuterVolumeSpecName: "kube-api-access-mhdtd") pod "38bf4d77-20b8-4319-a6dc-7334c13dc91f" (UID: "38bf4d77-20b8-4319-a6dc-7334c13dc91f"). InnerVolumeSpecName "kube-api-access-mhdtd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 13:44:07 crc kubenswrapper[4833]: I0219 13:44:07.196538 4833 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/38bf4d77-20b8-4319-a6dc-7334c13dc91f-host\") on node \"crc\" DevicePath \"\""
Feb 19 13:44:07 crc kubenswrapper[4833]: I0219 13:44:07.196593 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhdtd\" (UniqueName: \"kubernetes.io/projected/38bf4d77-20b8-4319-a6dc-7334c13dc91f-kube-api-access-mhdtd\") on node \"crc\" DevicePath \"\""
Feb 19 13:44:07 crc kubenswrapper[4833]: I0219 13:44:07.801565 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33b0919d36f915d609db0a970d8e2aaa04ba75a716d6453faed1195396d551b9"
Feb 19 13:44:07 crc kubenswrapper[4833]: I0219 13:44:07.801639 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wbwwr/crc-debug-v5vdk"
Feb 19 13:44:08 crc kubenswrapper[4833]: I0219 13:44:08.160429 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wbwwr/crc-debug-nqx7w"]
Feb 19 13:44:08 crc kubenswrapper[4833]: E0219 13:44:08.160796 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38bf4d77-20b8-4319-a6dc-7334c13dc91f" containerName="container-00"
Feb 19 13:44:08 crc kubenswrapper[4833]: I0219 13:44:08.160808 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="38bf4d77-20b8-4319-a6dc-7334c13dc91f" containerName="container-00"
Feb 19 13:44:08 crc kubenswrapper[4833]: I0219 13:44:08.160978 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="38bf4d77-20b8-4319-a6dc-7334c13dc91f" containerName="container-00"
Feb 19 13:44:08 crc kubenswrapper[4833]: I0219 13:44:08.161537 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wbwwr/crc-debug-nqx7w"
Feb 19 13:44:08 crc kubenswrapper[4833]: I0219 13:44:08.166851 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-wbwwr"/"default-dockercfg-6qc25"
Feb 19 13:44:08 crc kubenswrapper[4833]: I0219 13:44:08.222060 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw7qj\" (UniqueName: \"kubernetes.io/projected/ce843aa6-219d-47a0-b488-dbd1212d77df-kube-api-access-hw7qj\") pod \"crc-debug-nqx7w\" (UID: \"ce843aa6-219d-47a0-b488-dbd1212d77df\") " pod="openshift-must-gather-wbwwr/crc-debug-nqx7w"
Feb 19 13:44:08 crc kubenswrapper[4833]: I0219 13:44:08.222194 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ce843aa6-219d-47a0-b488-dbd1212d77df-host\") pod \"crc-debug-nqx7w\" (UID: \"ce843aa6-219d-47a0-b488-dbd1212d77df\") " pod="openshift-must-gather-wbwwr/crc-debug-nqx7w"
Feb 19 13:44:08 crc kubenswrapper[4833]: I0219 13:44:08.324561 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ce843aa6-219d-47a0-b488-dbd1212d77df-host\") pod \"crc-debug-nqx7w\" (UID: \"ce843aa6-219d-47a0-b488-dbd1212d77df\") " pod="openshift-must-gather-wbwwr/crc-debug-nqx7w"
Feb 19 13:44:08 crc kubenswrapper[4833]: I0219 13:44:08.325007 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw7qj\" (UniqueName: \"kubernetes.io/projected/ce843aa6-219d-47a0-b488-dbd1212d77df-kube-api-access-hw7qj\") pod \"crc-debug-nqx7w\" (UID: \"ce843aa6-219d-47a0-b488-dbd1212d77df\") " pod="openshift-must-gather-wbwwr/crc-debug-nqx7w"
Feb 19 13:44:08 crc kubenswrapper[4833]: I0219 13:44:08.326735 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ce843aa6-219d-47a0-b488-dbd1212d77df-host\") pod \"crc-debug-nqx7w\" (UID: \"ce843aa6-219d-47a0-b488-dbd1212d77df\") " pod="openshift-must-gather-wbwwr/crc-debug-nqx7w"
Feb 19 13:44:08 crc kubenswrapper[4833]: I0219 13:44:08.337430 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38bf4d77-20b8-4319-a6dc-7334c13dc91f" path="/var/lib/kubelet/pods/38bf4d77-20b8-4319-a6dc-7334c13dc91f/volumes"
Feb 19 13:44:08 crc kubenswrapper[4833]: I0219 13:44:08.372988 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw7qj\" (UniqueName: \"kubernetes.io/projected/ce843aa6-219d-47a0-b488-dbd1212d77df-kube-api-access-hw7qj\") pod \"crc-debug-nqx7w\" (UID: \"ce843aa6-219d-47a0-b488-dbd1212d77df\") " pod="openshift-must-gather-wbwwr/crc-debug-nqx7w"
Feb 19 13:44:08 crc kubenswrapper[4833]: I0219 13:44:08.477454 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wbwwr/crc-debug-nqx7w"
Feb 19 13:44:08 crc kubenswrapper[4833]: I0219 13:44:08.811843 4833 generic.go:334] "Generic (PLEG): container finished" podID="ce843aa6-219d-47a0-b488-dbd1212d77df" containerID="de5ed367a626c89af0080d3f298b3ed551299925e0916b052c2fdcd8739df32a" exitCode=0
Feb 19 13:44:08 crc kubenswrapper[4833]: I0219 13:44:08.811895 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wbwwr/crc-debug-nqx7w" event={"ID":"ce843aa6-219d-47a0-b488-dbd1212d77df","Type":"ContainerDied","Data":"de5ed367a626c89af0080d3f298b3ed551299925e0916b052c2fdcd8739df32a"}
Feb 19 13:44:08 crc kubenswrapper[4833]: I0219 13:44:08.811925 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wbwwr/crc-debug-nqx7w" event={"ID":"ce843aa6-219d-47a0-b488-dbd1212d77df","Type":"ContainerStarted","Data":"0d31e25adb1def59850cf7da8515e00d4833d1bd95ac552d5e8d837f1fbff652"}
Feb 19 13:44:09 crc kubenswrapper[4833]: I0219 13:44:09.167998 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wbwwr/crc-debug-nqx7w"]
Feb 19 13:44:09 crc kubenswrapper[4833]: I0219 13:44:09.179851 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wbwwr/crc-debug-nqx7w"]
Feb 19 13:44:09 crc kubenswrapper[4833]: I0219 13:44:09.918018 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wbwwr/crc-debug-nqx7w"
Feb 19 13:44:10 crc kubenswrapper[4833]: I0219 13:44:10.057732 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ce843aa6-219d-47a0-b488-dbd1212d77df-host\") pod \"ce843aa6-219d-47a0-b488-dbd1212d77df\" (UID: \"ce843aa6-219d-47a0-b488-dbd1212d77df\") "
Feb 19 13:44:10 crc kubenswrapper[4833]: I0219 13:44:10.057810 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw7qj\" (UniqueName: \"kubernetes.io/projected/ce843aa6-219d-47a0-b488-dbd1212d77df-kube-api-access-hw7qj\") pod \"ce843aa6-219d-47a0-b488-dbd1212d77df\" (UID: \"ce843aa6-219d-47a0-b488-dbd1212d77df\") "
Feb 19 13:44:10 crc kubenswrapper[4833]: I0219 13:44:10.057842 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce843aa6-219d-47a0-b488-dbd1212d77df-host" (OuterVolumeSpecName: "host") pod "ce843aa6-219d-47a0-b488-dbd1212d77df" (UID: "ce843aa6-219d-47a0-b488-dbd1212d77df"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 13:44:10 crc kubenswrapper[4833]: I0219 13:44:10.058479 4833 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ce843aa6-219d-47a0-b488-dbd1212d77df-host\") on node \"crc\" DevicePath \"\""
Feb 19 13:44:10 crc kubenswrapper[4833]: I0219 13:44:10.063667 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce843aa6-219d-47a0-b488-dbd1212d77df-kube-api-access-hw7qj" (OuterVolumeSpecName: "kube-api-access-hw7qj") pod "ce843aa6-219d-47a0-b488-dbd1212d77df" (UID: "ce843aa6-219d-47a0-b488-dbd1212d77df"). InnerVolumeSpecName "kube-api-access-hw7qj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 13:44:10 crc kubenswrapper[4833]: I0219 13:44:10.160049 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hw7qj\" (UniqueName: \"kubernetes.io/projected/ce843aa6-219d-47a0-b488-dbd1212d77df-kube-api-access-hw7qj\") on node \"crc\" DevicePath \"\""
Feb 19 13:44:10 crc kubenswrapper[4833]: I0219 13:44:10.326146 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce843aa6-219d-47a0-b488-dbd1212d77df" path="/var/lib/kubelet/pods/ce843aa6-219d-47a0-b488-dbd1212d77df/volumes"
Feb 19 13:44:10 crc kubenswrapper[4833]: I0219 13:44:10.377541 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wbwwr/crc-debug-cgx2q"]
Feb 19 13:44:10 crc kubenswrapper[4833]: E0219 13:44:10.378021 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce843aa6-219d-47a0-b488-dbd1212d77df" containerName="container-00"
Feb 19 13:44:10 crc kubenswrapper[4833]: I0219 13:44:10.378048 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce843aa6-219d-47a0-b488-dbd1212d77df" containerName="container-00"
Feb 19 13:44:10 crc kubenswrapper[4833]: I0219 13:44:10.378296 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce843aa6-219d-47a0-b488-dbd1212d77df" containerName="container-00"
Feb 19 13:44:10 crc kubenswrapper[4833]: I0219 13:44:10.379065 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wbwwr/crc-debug-cgx2q"
Feb 19 13:44:10 crc kubenswrapper[4833]: I0219 13:44:10.569624 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/faf47cbf-7633-4f34-b7a5-c5a2a43fd05e-host\") pod \"crc-debug-cgx2q\" (UID: \"faf47cbf-7633-4f34-b7a5-c5a2a43fd05e\") " pod="openshift-must-gather-wbwwr/crc-debug-cgx2q"
Feb 19 13:44:10 crc kubenswrapper[4833]: I0219 13:44:10.569719 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ld46\" (UniqueName: \"kubernetes.io/projected/faf47cbf-7633-4f34-b7a5-c5a2a43fd05e-kube-api-access-7ld46\") pod \"crc-debug-cgx2q\" (UID: \"faf47cbf-7633-4f34-b7a5-c5a2a43fd05e\") " pod="openshift-must-gather-wbwwr/crc-debug-cgx2q"
Feb 19 13:44:10 crc kubenswrapper[4833]: I0219 13:44:10.672351 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/faf47cbf-7633-4f34-b7a5-c5a2a43fd05e-host\") pod \"crc-debug-cgx2q\" (UID: \"faf47cbf-7633-4f34-b7a5-c5a2a43fd05e\") " pod="openshift-must-gather-wbwwr/crc-debug-cgx2q"
Feb 19 13:44:10 crc kubenswrapper[4833]: I0219 13:44:10.672428 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ld46\" (UniqueName: \"kubernetes.io/projected/faf47cbf-7633-4f34-b7a5-c5a2a43fd05e-kube-api-access-7ld46\") pod \"crc-debug-cgx2q\" (UID: \"faf47cbf-7633-4f34-b7a5-c5a2a43fd05e\") " pod="openshift-must-gather-wbwwr/crc-debug-cgx2q"
Feb 19 13:44:10 crc kubenswrapper[4833]: I0219 13:44:10.672478 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/faf47cbf-7633-4f34-b7a5-c5a2a43fd05e-host\") pod \"crc-debug-cgx2q\" (UID: \"faf47cbf-7633-4f34-b7a5-c5a2a43fd05e\") " pod="openshift-must-gather-wbwwr/crc-debug-cgx2q"
Feb 19 13:44:10 crc kubenswrapper[4833]: I0219 13:44:10.701808 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ld46\" (UniqueName: \"kubernetes.io/projected/faf47cbf-7633-4f34-b7a5-c5a2a43fd05e-kube-api-access-7ld46\") pod \"crc-debug-cgx2q\" (UID: \"faf47cbf-7633-4f34-b7a5-c5a2a43fd05e\") " pod="openshift-must-gather-wbwwr/crc-debug-cgx2q"
Feb 19 13:44:10 crc kubenswrapper[4833]: I0219 13:44:10.830094 4833 scope.go:117] "RemoveContainer" containerID="de5ed367a626c89af0080d3f298b3ed551299925e0916b052c2fdcd8739df32a"
Feb 19 13:44:10 crc kubenswrapper[4833]: I0219 13:44:10.830213 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wbwwr/crc-debug-nqx7w"
Feb 19 13:44:10 crc kubenswrapper[4833]: I0219 13:44:10.998175 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wbwwr/crc-debug-cgx2q"
Feb 19 13:44:11 crc kubenswrapper[4833]: W0219 13:44:11.030676 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfaf47cbf_7633_4f34_b7a5_c5a2a43fd05e.slice/crio-8ede6ac7ddda4ee891047387061c0e33b564b38433579a162be374b696f1f42f WatchSource:0}: Error finding container 8ede6ac7ddda4ee891047387061c0e33b564b38433579a162be374b696f1f42f: Status 404 returned error can't find the container with id 8ede6ac7ddda4ee891047387061c0e33b564b38433579a162be374b696f1f42f
Feb 19 13:44:11 crc kubenswrapper[4833]: I0219 13:44:11.844550 4833 generic.go:334] "Generic (PLEG): container finished" podID="faf47cbf-7633-4f34-b7a5-c5a2a43fd05e" containerID="f684c8e59272a1af4b4b6299fd200e049702a1df7a2ef3a74ed5015dbf9a1ade" exitCode=0
Feb 19 13:44:11 crc kubenswrapper[4833]: I0219 13:44:11.844664 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wbwwr/crc-debug-cgx2q" event={"ID":"faf47cbf-7633-4f34-b7a5-c5a2a43fd05e","Type":"ContainerDied","Data":"f684c8e59272a1af4b4b6299fd200e049702a1df7a2ef3a74ed5015dbf9a1ade"}
Feb 19 13:44:11 crc kubenswrapper[4833]: I0219 13:44:11.844984 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wbwwr/crc-debug-cgx2q" event={"ID":"faf47cbf-7633-4f34-b7a5-c5a2a43fd05e","Type":"ContainerStarted","Data":"8ede6ac7ddda4ee891047387061c0e33b564b38433579a162be374b696f1f42f"}
Feb 19 13:44:11 crc kubenswrapper[4833]: I0219 13:44:11.927076 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wbwwr/crc-debug-cgx2q"]
Feb 19 13:44:11 crc kubenswrapper[4833]: I0219 13:44:11.944888 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wbwwr/crc-debug-cgx2q"]
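Note the ordering for crc-debug-cgx2q just above: generic.go reports container f684c8e59272... as finished (exitCode=0) and kubelet emits its ContainerDied before the ContainerStarted for the pod's sandbox 8ede6ac7ddda... appears, i.e. the one-shot debug container ran to completion within a single PLEG relist. A heuristic for spotting such short-lived containers, reusing the timelines() shape from the sketch further up (illustrative):

    # Flag container IDs whose ContainerDied arrives without a previously
    # observed ContainerStarted for the same ID -- typical of one-shot
    # containers (like the crc-debug pods here) that exit between relists.
    def short_lived(events):
        flagged = []
        for pod, evs in events.items():
            started = set()
            for ev in evs:
                if len(ev) != 3:
                    continue  # skip SyncLoop tuples, keep PLEG ones
                _, kind, cid = ev
                if kind == "ContainerStarted":
                    started.add(cid)
                elif kind == "ContainerDied" and cid not in started:
                    flagged.append((pod, cid))
        return flagged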
Feb 19 13:44:12 crc kubenswrapper[4833]: I0219 13:44:12.987533 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wbwwr/crc-debug-cgx2q"
Feb 19 13:44:13 crc kubenswrapper[4833]: I0219 13:44:13.122312 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ld46\" (UniqueName: \"kubernetes.io/projected/faf47cbf-7633-4f34-b7a5-c5a2a43fd05e-kube-api-access-7ld46\") pod \"faf47cbf-7633-4f34-b7a5-c5a2a43fd05e\" (UID: \"faf47cbf-7633-4f34-b7a5-c5a2a43fd05e\") "
Feb 19 13:44:13 crc kubenswrapper[4833]: I0219 13:44:13.122380 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/faf47cbf-7633-4f34-b7a5-c5a2a43fd05e-host\") pod \"faf47cbf-7633-4f34-b7a5-c5a2a43fd05e\" (UID: \"faf47cbf-7633-4f34-b7a5-c5a2a43fd05e\") "
Feb 19 13:44:13 crc kubenswrapper[4833]: I0219 13:44:13.122600 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/faf47cbf-7633-4f34-b7a5-c5a2a43fd05e-host" (OuterVolumeSpecName: "host") pod "faf47cbf-7633-4f34-b7a5-c5a2a43fd05e" (UID: "faf47cbf-7633-4f34-b7a5-c5a2a43fd05e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 13:44:13 crc kubenswrapper[4833]: I0219 13:44:13.123194 4833 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/faf47cbf-7633-4f34-b7a5-c5a2a43fd05e-host\") on node \"crc\" DevicePath \"\""
Feb 19 13:44:13 crc kubenswrapper[4833]: I0219 13:44:13.126888 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faf47cbf-7633-4f34-b7a5-c5a2a43fd05e-kube-api-access-7ld46" (OuterVolumeSpecName: "kube-api-access-7ld46") pod "faf47cbf-7633-4f34-b7a5-c5a2a43fd05e" (UID: "faf47cbf-7633-4f34-b7a5-c5a2a43fd05e"). InnerVolumeSpecName "kube-api-access-7ld46". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 13:44:13 crc kubenswrapper[4833]: I0219 13:44:13.225249 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ld46\" (UniqueName: \"kubernetes.io/projected/faf47cbf-7633-4f34-b7a5-c5a2a43fd05e-kube-api-access-7ld46\") on node \"crc\" DevicePath \"\""
Feb 19 13:44:13 crc kubenswrapper[4833]: I0219 13:44:13.865463 4833 scope.go:117] "RemoveContainer" containerID="f684c8e59272a1af4b4b6299fd200e049702a1df7a2ef3a74ed5015dbf9a1ade"
Feb 19 13:44:13 crc kubenswrapper[4833]: I0219 13:44:13.865528 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wbwwr/crc-debug-cgx2q"
Feb 19 13:44:14 crc kubenswrapper[4833]: I0219 13:44:14.325874 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faf47cbf-7633-4f34-b7a5-c5a2a43fd05e" path="/var/lib/kubelet/pods/faf47cbf-7633-4f34-b7a5-c5a2a43fd05e/volumes"
Feb 19 13:44:27 crc kubenswrapper[4833]: I0219 13:44:27.512447 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-8468b886b8-mz8xd_ad7485d9-4e14-49c1-bf60-8a0146d26df0/barbican-api/0.log"
Feb 19 13:44:27 crc kubenswrapper[4833]: I0219 13:44:27.687759 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-8468b886b8-mz8xd_ad7485d9-4e14-49c1-bf60-8a0146d26df0/barbican-api-log/0.log"
Feb 19 13:44:27 crc kubenswrapper[4833]: I0219 13:44:27.705999 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-56b45cfbd8-5dfjr_f30e4d86-a08b-4021-8c83-3fb5abe86152/barbican-keystone-listener/0.log"
Feb 19 13:44:27 crc kubenswrapper[4833]: I0219 13:44:27.746269 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-56b45cfbd8-5dfjr_f30e4d86-a08b-4021-8c83-3fb5abe86152/barbican-keystone-listener-log/0.log"
Feb 19 13:44:27 crc kubenswrapper[4833]: I0219 13:44:27.855921 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-546699bf4c-sqbpl_6595e595-9cbc-44bb-8629-a53da3b75bd6/barbican-worker/0.log"
Feb 19 13:44:27 crc kubenswrapper[4833]: I0219 13:44:27.916925 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-546699bf4c-sqbpl_6595e595-9cbc-44bb-8629-a53da3b75bd6/barbican-worker-log/0.log"
Feb 19 13:44:28 crc kubenswrapper[4833]: I0219 13:44:28.110190 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp_71fc87a7-2568-481c-a841-6500a69ba8b9/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 13:44:28 crc kubenswrapper[4833]: I0219 13:44:28.216549 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8d9232be-2376-4ec9-9f32-16de9f8942d0/ceilometer-central-agent/0.log"
Feb 19 13:44:28 crc kubenswrapper[4833]: I0219 13:44:28.227650 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8d9232be-2376-4ec9-9f32-16de9f8942d0/ceilometer-notification-agent/0.log"
Feb 19 13:44:28 crc kubenswrapper[4833]: I0219 13:44:28.282751 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8d9232be-2376-4ec9-9f32-16de9f8942d0/proxy-httpd/0.log"
Feb 19 13:44:28 crc kubenswrapper[4833]: I0219 13:44:28.320305 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8d9232be-2376-4ec9-9f32-16de9f8942d0/sg-core/0.log"
Feb 19 13:44:28 crc kubenswrapper[4833]: I0219 13:44:28.458253 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8ffac1cb-2dd7-4ff9-92e1-a41a23411f57/cinder-api/0.log"
Feb 19 13:44:28 crc kubenswrapper[4833]: I0219 13:44:28.542876 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8ffac1cb-2dd7-4ff9-92e1-a41a23411f57/cinder-api-log/0.log"
Feb 19 13:44:28 crc kubenswrapper[4833]: I0219 13:44:28.662804 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_77866722-bd38-4757-b8a0-d2939b40d2ee/cinder-scheduler/0.log"
Feb 19 13:44:28 crc kubenswrapper[4833]: I0219 13:44:28.680958 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_77866722-bd38-4757-b8a0-d2939b40d2ee/probe/0.log"
Feb 19 13:44:28 crc kubenswrapper[4833]: I0219 13:44:28.768069 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-pwxsf_d1060705-48ca-43e4-8a72-0fbd655875a6/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 13:44:28 crc kubenswrapper[4833]: I0219 13:44:28.890396 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-s6sb4_46a1f7d2-8e31-4ef8-8508-08be63d0fee2/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 13:44:29 crc kubenswrapper[4833]: I0219 13:44:29.015950 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-lg95r_e9222a60-0b24-4d91-8002-74747339c9d5/init/0.log"
Feb 19 13:44:29 crc kubenswrapper[4833]: I0219 13:44:29.144527 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-lg95r_e9222a60-0b24-4d91-8002-74747339c9d5/init/0.log"
Feb 19 13:44:29 crc kubenswrapper[4833]: I0219 13:44:29.189911 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-gn8cb_ade3dcb5-7a6a-4bef-a706-01dbd2d074a1/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 13:44:29 crc kubenswrapper[4833]: I0219 13:44:29.209946 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-lg95r_e9222a60-0b24-4d91-8002-74747339c9d5/dnsmasq-dns/0.log"
Feb 19 13:44:29 crc kubenswrapper[4833]: I0219 13:44:29.346894 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_42dcfe39-3d5b-4e0a-8b07-658ec7f665ba/glance-httpd/0.log"
Feb 19 13:44:29 crc kubenswrapper[4833]: I0219 13:44:29.408162 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_42dcfe39-3d5b-4e0a-8b07-658ec7f665ba/glance-log/0.log"
Feb 19 13:44:29 crc kubenswrapper[4833]: I0219 13:44:29.571453 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_0f177d83-63c7-433e-aeb0-e8a91b6216f8/glance-log/0.log"
Feb 19 13:44:29 crc kubenswrapper[4833]: I0219 13:44:29.594899 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_0f177d83-63c7-433e-aeb0-e8a91b6216f8/glance-httpd/0.log"
Feb 19 13:44:29 crc kubenswrapper[4833]: I0219 13:44:29.724152 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7b954444d4-2mwt9_88341f77-7fab-4dba-be1d-8e11becd2953/horizon/0.log"
Feb 19 13:44:29 crc kubenswrapper[4833]: I0219 13:44:29.903583 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g_1abf51ed-df14-4ea8-a9df-e6ee9810e40e/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 13:44:30 crc kubenswrapper[4833]: I0219 13:44:30.037823 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7b954444d4-2mwt9_88341f77-7fab-4dba-be1d-8e11becd2953/horizon-log/0.log"
Feb 19 13:44:30 crc kubenswrapper[4833]: I0219 13:44:30.108328 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-vj2ln_56fbaae9-eaee-4f1d-99b6-53bc919ecb4b/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 13:44:30 crc kubenswrapper[4833]: I0219 13:44:30.313139 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-54c7bb578f-26gwx_a880b98a-d4ab-49dd-bc84-ff52c67c5432/keystone-api/0.log"
Feb 19 13:44:30 crc kubenswrapper[4833]: I0219 13:44:30.344709 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_d7f4a85b-484c-414d-969f-58baa362a1ff/kube-state-metrics/0.log"
Feb 19 13:44:30 crc kubenswrapper[4833]: I0219 13:44:30.496049 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv_c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 13:44:30 crc kubenswrapper[4833]: I0219 13:44:30.864761 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-75865f57f7-4q4h9_cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7/neutron-api/0.log"
Feb 19 13:44:30 crc kubenswrapper[4833]: I0219 13:44:30.867796 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-75865f57f7-4q4h9_cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7/neutron-httpd/0.log"
Feb 19 13:44:31 crc kubenswrapper[4833]: I0219 13:44:31.050680 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx_5e2ba26c-7bab-411e-80f6-bf1e77dce436/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 13:44:31 crc kubenswrapper[4833]: I0219 13:44:31.590239 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_70e644c8-55f1-4d68-8cfc-f4a12ed42ec2/nova-api-log/0.log"
Feb 19 13:44:31 crc kubenswrapper[4833]: I0219 13:44:31.691782 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_5cccac96-51b3-457e-86eb-bd59ce49b7cf/nova-cell0-conductor-conductor/0.log"
Feb 19 13:44:31 crc kubenswrapper[4833]: I0219 13:44:31.849125 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_70e644c8-55f1-4d68-8cfc-f4a12ed42ec2/nova-api-api/0.log"
Feb 19 13:44:31 crc kubenswrapper[4833]: I0219 13:44:31.859302 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_cff86803-41bf-463e-a5ea-30f70425a39a/nova-cell1-conductor-conductor/0.log"
Feb 19 13:44:31 crc kubenswrapper[4833]: I0219 13:44:31.985669 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_77001968-5717-445a-b12e-a1318c720b23/nova-cell1-novncproxy-novncproxy/0.log"
Feb 19 13:44:32 crc kubenswrapper[4833]: I0219 13:44:32.112183 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-rk95c_cf0f1512-542b-4358-b74b-57df19d9c7d3/nova-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 13:44:32 crc kubenswrapper[4833]: I0219 13:44:32.254704 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_59a4389e-efff-4621-bc9d-548f8c2b78f9/nova-metadata-log/0.log"
Feb 19 13:44:32 crc kubenswrapper[4833]: I0219 13:44:32.524645 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_b99a7fca-b744-4c37-abf6-76f23e90f7da/nova-scheduler-scheduler/0.log"
Feb 19 13:44:32 crc kubenswrapper[4833]: I0219 13:44:32.616359 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_866102e5-b1c2-4f33-9c34-312be44faea7/mysql-bootstrap/0.log"
Feb 19 13:44:32 crc kubenswrapper[4833]: I0219 13:44:32.829792 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_866102e5-b1c2-4f33-9c34-312be44faea7/mysql-bootstrap/0.log"
Feb 19 13:44:32 crc kubenswrapper[4833]: I0219 13:44:32.831829 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_866102e5-b1c2-4f33-9c34-312be44faea7/galera/0.log"
Feb 19 13:44:32 crc kubenswrapper[4833]: I0219 13:44:32.987558 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_679ec18d-1d70-4cc5-8103-b28f0809a45e/mysql-bootstrap/0.log"
Feb 19 13:44:33 crc kubenswrapper[4833]: I0219 13:44:33.161613 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_679ec18d-1d70-4cc5-8103-b28f0809a45e/mysql-bootstrap/0.log"
Feb 19 13:44:33 crc kubenswrapper[4833]: I0219 13:44:33.188852 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_679ec18d-1d70-4cc5-8103-b28f0809a45e/galera/0.log"
Feb 19 13:44:33 crc kubenswrapper[4833]: I0219 13:44:33.347847 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_59a4389e-efff-4621-bc9d-548f8c2b78f9/nova-metadata-metadata/0.log"
Feb 19 13:44:33 crc kubenswrapper[4833]: I0219 13:44:33.357891 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_df72920d-e022-48f9-b41c-f2fe6ed14da9/openstackclient/0.log"
Feb 19 13:44:33 crc kubenswrapper[4833]: I0219 13:44:33.445345 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-fbrgv_488bba31-e718-4ef1-bd04-6ed3fe165c89/ovn-controller/0.log"
Feb 19 13:44:33 crc kubenswrapper[4833]: I0219 13:44:33.560139 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-d6gmv_ba309e83-ab80-44b0-95a6-01034dfcca68/openstack-network-exporter/0.log"
Feb 19 13:44:33 crc kubenswrapper[4833]: I0219 13:44:33.737697 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9jlg7_afdbcf60-89d8-426e-9323-1347d5cb238f/ovsdb-server-init/0.log"
Feb 19 13:44:33 crc kubenswrapper[4833]: I0219 13:44:33.919758 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9jlg7_afdbcf60-89d8-426e-9323-1347d5cb238f/ovsdb-server-init/0.log"
Feb 19 13:44:33 crc kubenswrapper[4833]: I0219 13:44:33.929936 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9jlg7_afdbcf60-89d8-426e-9323-1347d5cb238f/ovsdb-server/0.log"
Feb 19 13:44:33 crc kubenswrapper[4833]: I0219 13:44:33.983622 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9jlg7_afdbcf60-89d8-426e-9323-1347d5cb238f/ovs-vswitchd/0.log"
Feb 19 13:44:34 crc kubenswrapper[4833]: I0219 13:44:34.170449 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-6hdvq_3952291f-b3f9-4309-ae64-d6cbef7d6607/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 13:44:34 crc kubenswrapper[4833]: I0219 13:44:34.171433 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1e175eae-03fe-4c4b-b5d2-96df10844449/openstack-network-exporter/0.log"
Feb 19 13:44:34 crc kubenswrapper[4833]: I0219 13:44:34.232337 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1e175eae-03fe-4c4b-b5d2-96df10844449/ovn-northd/0.log"
Feb 19 13:44:34 crc kubenswrapper[4833]: I0219 13:44:34.330779 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_55636a14-4194-419e-be9c-d4f8c4064d77/openstack-network-exporter/0.log"
Feb 19 13:44:34 crc kubenswrapper[4833]: I0219 13:44:34.430742 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_55636a14-4194-419e-be9c-d4f8c4064d77/ovsdbserver-nb/0.log"
Feb 19 13:44:34 crc kubenswrapper[4833]: I0219 13:44:34.568071 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1c5ea761-d056-4868-af9f-309486208889/openstack-network-exporter/0.log"
Feb 19 13:44:34 crc kubenswrapper[4833]: I0219 13:44:34.569718 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1c5ea761-d056-4868-af9f-309486208889/ovsdbserver-sb/0.log"
Feb 19 13:44:34 crc kubenswrapper[4833]: I0219 13:44:34.702518 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-64f9d5d984-h9kbm_5f9f5174-162c-418a-8f37-09af448d7716/placement-api/0.log"
Feb 19 13:44:34 crc kubenswrapper[4833]: I0219 13:44:34.862422 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-64f9d5d984-h9kbm_5f9f5174-162c-418a-8f37-09af448d7716/placement-log/0.log"
Feb 19 13:44:34 crc kubenswrapper[4833]: I0219 13:44:34.901939 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_52361cb4-eea4-49c7-b06b-acbe0ad24450/setup-container/0.log"
Feb 19 13:44:35 crc kubenswrapper[4833]: I0219 13:44:35.140132 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_95192227-96aa-4fa8-a7db-89f31efb056c/setup-container/0.log"
Feb 19 13:44:35 crc kubenswrapper[4833]: I0219 13:44:35.174823 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_52361cb4-eea4-49c7-b06b-acbe0ad24450/rabbitmq/0.log"
Feb 19 13:44:35 crc kubenswrapper[4833]: I0219 13:44:35.194979 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_52361cb4-eea4-49c7-b06b-acbe0ad24450/setup-container/0.log"
Feb 19 13:44:35 crc kubenswrapper[4833]: I0219 13:44:35.406320 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_95192227-96aa-4fa8-a7db-89f31efb056c/setup-container/0.log"
Feb 19 13:44:35 crc kubenswrapper[4833]: I0219 13:44:35.450454 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-cmwbg_80c2b43f-f289-49a9-a544-08316b461536/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 13:44:35 crc kubenswrapper[4833]: I0219 13:44:35.466196 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_95192227-96aa-4fa8-a7db-89f31efb056c/rabbitmq/0.log"
Feb 19 13:44:35 crc kubenswrapper[4833]: I0219 13:44:35.641229 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-xnzhk_810a4b4a-798a-4dbc-9f86-81377c37d104/redhat-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 13:44:35 crc kubenswrapper[4833]: I0219 13:44:35.683085 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt_0457ceaa-c998-49db-bfa7-f837bf684537/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
19 13:44:35 crc kubenswrapper[4833]: I0219 13:44:35.942517 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-8s8fs_290637db-709b-4ce8-a200-76e9bf643d55/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 13:44:36 crc kubenswrapper[4833]: I0219 13:44:36.015025 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-fndfb_f25a8685-c4bf-460a-a553-e26d1ddc9d09/ssh-known-hosts-edpm-deployment/0.log" Feb 19 13:44:36 crc kubenswrapper[4833]: I0219 13:44:36.184055 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-756fd4958c-8cv9q_c148317b-fc12-4940-8fb0-587c8eff29f9/proxy-server/0.log" Feb 19 13:44:36 crc kubenswrapper[4833]: I0219 13:44:36.274395 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-756fd4958c-8cv9q_c148317b-fc12-4940-8fb0-587c8eff29f9/proxy-httpd/0.log" Feb 19 13:44:36 crc kubenswrapper[4833]: I0219 13:44:36.380662 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-p2m8p_46126eda-f691-4339-966c-615190176dea/swift-ring-rebalance/0.log" Feb 19 13:44:36 crc kubenswrapper[4833]: I0219 13:44:36.488784 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0dfc7a49-4c64-4c4c-b0a9-eea1d8734612/account-auditor/0.log" Feb 19 13:44:36 crc kubenswrapper[4833]: I0219 13:44:36.529785 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0dfc7a49-4c64-4c4c-b0a9-eea1d8734612/account-reaper/0.log" Feb 19 13:44:36 crc kubenswrapper[4833]: I0219 13:44:36.604309 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0dfc7a49-4c64-4c4c-b0a9-eea1d8734612/account-server/0.log" Feb 19 13:44:36 crc kubenswrapper[4833]: I0219 13:44:36.619843 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0dfc7a49-4c64-4c4c-b0a9-eea1d8734612/account-replicator/0.log" Feb 19 13:44:36 crc kubenswrapper[4833]: I0219 13:44:36.678209 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0dfc7a49-4c64-4c4c-b0a9-eea1d8734612/container-auditor/0.log" Feb 19 13:44:36 crc kubenswrapper[4833]: I0219 13:44:36.765107 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0dfc7a49-4c64-4c4c-b0a9-eea1d8734612/container-replicator/0.log" Feb 19 13:44:36 crc kubenswrapper[4833]: I0219 13:44:36.799518 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0dfc7a49-4c64-4c4c-b0a9-eea1d8734612/container-server/0.log" Feb 19 13:44:36 crc kubenswrapper[4833]: I0219 13:44:36.857706 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0dfc7a49-4c64-4c4c-b0a9-eea1d8734612/container-updater/0.log" Feb 19 13:44:36 crc kubenswrapper[4833]: I0219 13:44:36.886063 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0dfc7a49-4c64-4c4c-b0a9-eea1d8734612/object-auditor/0.log" Feb 19 13:44:36 crc kubenswrapper[4833]: I0219 13:44:36.976171 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0dfc7a49-4c64-4c4c-b0a9-eea1d8734612/object-expirer/0.log" Feb 19 13:44:37 crc kubenswrapper[4833]: I0219 13:44:37.007096 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0dfc7a49-4c64-4c4c-b0a9-eea1d8734612/object-replicator/0.log" Feb 19 
13:44:37 crc kubenswrapper[4833]: I0219 13:44:37.084144 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0dfc7a49-4c64-4c4c-b0a9-eea1d8734612/object-server/0.log" Feb 19 13:44:37 crc kubenswrapper[4833]: I0219 13:44:37.123217 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0dfc7a49-4c64-4c4c-b0a9-eea1d8734612/object-updater/0.log" Feb 19 13:44:37 crc kubenswrapper[4833]: I0219 13:44:37.206284 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0dfc7a49-4c64-4c4c-b0a9-eea1d8734612/rsync/0.log" Feb 19 13:44:37 crc kubenswrapper[4833]: I0219 13:44:37.227612 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0dfc7a49-4c64-4c4c-b0a9-eea1d8734612/swift-recon-cron/0.log" Feb 19 13:44:37 crc kubenswrapper[4833]: I0219 13:44:37.392460 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc_3d0af35d-1268-4a37-a176-e2ca439c6ba6/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 13:44:37 crc kubenswrapper[4833]: I0219 13:44:37.485516 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_fbca1583-1d12-4e49-bda3-864536093e85/tempest-tests-tempest-tests-runner/0.log" Feb 19 13:44:37 crc kubenswrapper[4833]: I0219 13:44:37.638830 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_398ff1ce-0aa5-4f20-9ff3-e21807d5771c/test-operator-logs-container/0.log" Feb 19 13:44:37 crc kubenswrapper[4833]: I0219 13:44:37.719963 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-5hfnv_6533780f-2a0a-484f-afa5-ad561486e8a2/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 19 13:44:46 crc kubenswrapper[4833]: I0219 13:44:46.292859 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_8c04a00b-8613-472f-bf1b-e1d26ed34312/memcached/0.log" Feb 19 13:45:00 crc kubenswrapper[4833]: I0219 13:45:00.165696 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525145-m9h4p"] Feb 19 13:45:00 crc kubenswrapper[4833]: E0219 13:45:00.166814 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faf47cbf-7633-4f34-b7a5-c5a2a43fd05e" containerName="container-00" Feb 19 13:45:00 crc kubenswrapper[4833]: I0219 13:45:00.166834 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf47cbf-7633-4f34-b7a5-c5a2a43fd05e" containerName="container-00" Feb 19 13:45:00 crc kubenswrapper[4833]: I0219 13:45:00.167100 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="faf47cbf-7633-4f34-b7a5-c5a2a43fd05e" containerName="container-00" Feb 19 13:45:00 crc kubenswrapper[4833]: I0219 13:45:00.168017 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525145-m9h4p" Feb 19 13:45:00 crc kubenswrapper[4833]: I0219 13:45:00.170640 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 13:45:00 crc kubenswrapper[4833]: I0219 13:45:00.173634 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 13:45:00 crc kubenswrapper[4833]: I0219 13:45:00.178000 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525145-m9h4p"] Feb 19 13:45:00 crc kubenswrapper[4833]: I0219 13:45:00.261275 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzhj2\" (UniqueName: \"kubernetes.io/projected/ba353eac-d0ad-4bd9-a287-e85a290dd5ca-kube-api-access-kzhj2\") pod \"collect-profiles-29525145-m9h4p\" (UID: \"ba353eac-d0ad-4bd9-a287-e85a290dd5ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525145-m9h4p" Feb 19 13:45:00 crc kubenswrapper[4833]: I0219 13:45:00.261414 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ba353eac-d0ad-4bd9-a287-e85a290dd5ca-secret-volume\") pod \"collect-profiles-29525145-m9h4p\" (UID: \"ba353eac-d0ad-4bd9-a287-e85a290dd5ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525145-m9h4p" Feb 19 13:45:00 crc kubenswrapper[4833]: I0219 13:45:00.261453 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba353eac-d0ad-4bd9-a287-e85a290dd5ca-config-volume\") pod \"collect-profiles-29525145-m9h4p\" (UID: \"ba353eac-d0ad-4bd9-a287-e85a290dd5ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525145-m9h4p" Feb 19 13:45:00 crc kubenswrapper[4833]: I0219 13:45:00.362951 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ba353eac-d0ad-4bd9-a287-e85a290dd5ca-secret-volume\") pod \"collect-profiles-29525145-m9h4p\" (UID: \"ba353eac-d0ad-4bd9-a287-e85a290dd5ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525145-m9h4p" Feb 19 13:45:00 crc kubenswrapper[4833]: I0219 13:45:00.363014 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba353eac-d0ad-4bd9-a287-e85a290dd5ca-config-volume\") pod \"collect-profiles-29525145-m9h4p\" (UID: \"ba353eac-d0ad-4bd9-a287-e85a290dd5ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525145-m9h4p" Feb 19 13:45:00 crc kubenswrapper[4833]: I0219 13:45:00.363061 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzhj2\" (UniqueName: \"kubernetes.io/projected/ba353eac-d0ad-4bd9-a287-e85a290dd5ca-kube-api-access-kzhj2\") pod \"collect-profiles-29525145-m9h4p\" (UID: \"ba353eac-d0ad-4bd9-a287-e85a290dd5ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525145-m9h4p" Feb 19 13:45:00 crc kubenswrapper[4833]: I0219 13:45:00.364201 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba353eac-d0ad-4bd9-a287-e85a290dd5ca-config-volume\") pod 
\"collect-profiles-29525145-m9h4p\" (UID: \"ba353eac-d0ad-4bd9-a287-e85a290dd5ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525145-m9h4p" Feb 19 13:45:00 crc kubenswrapper[4833]: I0219 13:45:00.368889 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ba353eac-d0ad-4bd9-a287-e85a290dd5ca-secret-volume\") pod \"collect-profiles-29525145-m9h4p\" (UID: \"ba353eac-d0ad-4bd9-a287-e85a290dd5ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525145-m9h4p" Feb 19 13:45:00 crc kubenswrapper[4833]: I0219 13:45:00.380091 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzhj2\" (UniqueName: \"kubernetes.io/projected/ba353eac-d0ad-4bd9-a287-e85a290dd5ca-kube-api-access-kzhj2\") pod \"collect-profiles-29525145-m9h4p\" (UID: \"ba353eac-d0ad-4bd9-a287-e85a290dd5ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525145-m9h4p" Feb 19 13:45:00 crc kubenswrapper[4833]: I0219 13:45:00.489748 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525145-m9h4p" Feb 19 13:45:01 crc kubenswrapper[4833]: I0219 13:45:01.057143 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525145-m9h4p"] Feb 19 13:45:01 crc kubenswrapper[4833]: I0219 13:45:01.256382 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525145-m9h4p" event={"ID":"ba353eac-d0ad-4bd9-a287-e85a290dd5ca","Type":"ContainerStarted","Data":"10dc440c0e87d88dbef1c7454634791b3d5382c77b3f43c5559bb6c7bfd1bb92"} Feb 19 13:45:01 crc kubenswrapper[4833]: I0219 13:45:01.256773 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525145-m9h4p" event={"ID":"ba353eac-d0ad-4bd9-a287-e85a290dd5ca","Type":"ContainerStarted","Data":"13801ec45d8e3fdaafdd09849e18cfa2dece0ba5b4e5e2adb6ca21b6adf7d47b"} Feb 19 13:45:01 crc kubenswrapper[4833]: I0219 13:45:01.276555 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29525145-m9h4p" podStartSLOduration=1.276531617 podStartE2EDuration="1.276531617s" podCreationTimestamp="2026-02-19 13:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:45:01.272797468 +0000 UTC m=+3511.668316236" watchObservedRunningTime="2026-02-19 13:45:01.276531617 +0000 UTC m=+3511.672050385" Feb 19 13:45:02 crc kubenswrapper[4833]: I0219 13:45:02.280400 4833 generic.go:334] "Generic (PLEG): container finished" podID="ba353eac-d0ad-4bd9-a287-e85a290dd5ca" containerID="10dc440c0e87d88dbef1c7454634791b3d5382c77b3f43c5559bb6c7bfd1bb92" exitCode=0 Feb 19 13:45:02 crc kubenswrapper[4833]: I0219 13:45:02.280793 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525145-m9h4p" event={"ID":"ba353eac-d0ad-4bd9-a287-e85a290dd5ca","Type":"ContainerDied","Data":"10dc440c0e87d88dbef1c7454634791b3d5382c77b3f43c5559bb6c7bfd1bb92"} Feb 19 13:45:02 crc kubenswrapper[4833]: I0219 13:45:02.908749 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst_095e674e-7762-4748-896c-1e0b2dd9fbfc/util/0.log" 
Feb 19 13:45:03 crc kubenswrapper[4833]: I0219 13:45:03.155531 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst_095e674e-7762-4748-896c-1e0b2dd9fbfc/util/0.log"
Feb 19 13:45:03 crc kubenswrapper[4833]: I0219 13:45:03.184155 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst_095e674e-7762-4748-896c-1e0b2dd9fbfc/pull/0.log"
Feb 19 13:45:03 crc kubenswrapper[4833]: I0219 13:45:03.192539 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst_095e674e-7762-4748-896c-1e0b2dd9fbfc/pull/0.log"
Feb 19 13:45:03 crc kubenswrapper[4833]: I0219 13:45:03.365717 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst_095e674e-7762-4748-896c-1e0b2dd9fbfc/extract/0.log"
Feb 19 13:45:03 crc kubenswrapper[4833]: I0219 13:45:03.382341 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst_095e674e-7762-4748-896c-1e0b2dd9fbfc/util/0.log"
Feb 19 13:45:03 crc kubenswrapper[4833]: I0219 13:45:03.389908 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst_095e674e-7762-4748-896c-1e0b2dd9fbfc/pull/0.log"
Feb 19 13:45:03 crc kubenswrapper[4833]: I0219 13:45:03.689367 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525145-m9h4p"
Feb 19 13:45:03 crc kubenswrapper[4833]: I0219 13:45:03.838834 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba353eac-d0ad-4bd9-a287-e85a290dd5ca-config-volume\") pod \"ba353eac-d0ad-4bd9-a287-e85a290dd5ca\" (UID: \"ba353eac-d0ad-4bd9-a287-e85a290dd5ca\") "
Feb 19 13:45:03 crc kubenswrapper[4833]: I0219 13:45:03.838933 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ba353eac-d0ad-4bd9-a287-e85a290dd5ca-secret-volume\") pod \"ba353eac-d0ad-4bd9-a287-e85a290dd5ca\" (UID: \"ba353eac-d0ad-4bd9-a287-e85a290dd5ca\") "
Feb 19 13:45:03 crc kubenswrapper[4833]: I0219 13:45:03.839096 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzhj2\" (UniqueName: \"kubernetes.io/projected/ba353eac-d0ad-4bd9-a287-e85a290dd5ca-kube-api-access-kzhj2\") pod \"ba353eac-d0ad-4bd9-a287-e85a290dd5ca\" (UID: \"ba353eac-d0ad-4bd9-a287-e85a290dd5ca\") "
Feb 19 13:45:03 crc kubenswrapper[4833]: I0219 13:45:03.841001 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba353eac-d0ad-4bd9-a287-e85a290dd5ca-config-volume" (OuterVolumeSpecName: "config-volume") pod "ba353eac-d0ad-4bd9-a287-e85a290dd5ca" (UID: "ba353eac-d0ad-4bd9-a287-e85a290dd5ca"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 13:45:03 crc kubenswrapper[4833]: I0219 13:45:03.851488 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba353eac-d0ad-4bd9-a287-e85a290dd5ca-kube-api-access-kzhj2" (OuterVolumeSpecName: "kube-api-access-kzhj2") pod "ba353eac-d0ad-4bd9-a287-e85a290dd5ca" (UID: "ba353eac-d0ad-4bd9-a287-e85a290dd5ca"). InnerVolumeSpecName "kube-api-access-kzhj2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 13:45:03 crc kubenswrapper[4833]: I0219 13:45:03.861196 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba353eac-d0ad-4bd9-a287-e85a290dd5ca-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ba353eac-d0ad-4bd9-a287-e85a290dd5ca" (UID: "ba353eac-d0ad-4bd9-a287-e85a290dd5ca"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 13:45:03 crc kubenswrapper[4833]: I0219 13:45:03.940808 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzhj2\" (UniqueName: \"kubernetes.io/projected/ba353eac-d0ad-4bd9-a287-e85a290dd5ca-kube-api-access-kzhj2\") on node \"crc\" DevicePath \"\""
Feb 19 13:45:03 crc kubenswrapper[4833]: I0219 13:45:03.940838 4833 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba353eac-d0ad-4bd9-a287-e85a290dd5ca-config-volume\") on node \"crc\" DevicePath \"\""
Feb 19 13:45:03 crc kubenswrapper[4833]: I0219 13:45:03.940847 4833 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ba353eac-d0ad-4bd9-a287-e85a290dd5ca-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 19 13:45:04 crc kubenswrapper[4833]: I0219 13:45:04.097222 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-ztqm7_dd33e5e9-4983-4954-966e-a693cc5c299b/manager/0.log"
Feb 19 13:45:04 crc kubenswrapper[4833]: I0219 13:45:04.315431 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525145-m9h4p"
Feb 19 13:45:04 crc kubenswrapper[4833]: I0219 13:45:04.328247 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525145-m9h4p" event={"ID":"ba353eac-d0ad-4bd9-a287-e85a290dd5ca","Type":"ContainerDied","Data":"13801ec45d8e3fdaafdd09849e18cfa2dece0ba5b4e5e2adb6ca21b6adf7d47b"}
Feb 19 13:45:04 crc kubenswrapper[4833]: I0219 13:45:04.328283 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13801ec45d8e3fdaafdd09849e18cfa2dece0ba5b4e5e2adb6ca21b6adf7d47b"
Feb 19 13:45:04 crc kubenswrapper[4833]: I0219 13:45:04.357640 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525100-99zqb"]
Feb 19 13:45:04 crc kubenswrapper[4833]: I0219 13:45:04.368320 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525100-99zqb"]
Feb 19 13:45:04 crc kubenswrapper[4833]: I0219 13:45:04.470373 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-hq98q_c960bafe-e1ce-4635-a849-758a84db3b0e/manager/0.log"
Feb 19 13:45:04 crc kubenswrapper[4833]: I0219 13:45:04.613292 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-wthd9_70feab77-0665-499a-b6e2-b35b95384ab7/manager/0.log"
Feb 19 13:45:04 crc kubenswrapper[4833]: I0219 13:45:04.874492 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-ng9mx_0168dd3a-5296-440d-8b46-d858da1cfeb6/manager/0.log"
Feb 19 13:45:05 crc kubenswrapper[4833]: I0219 13:45:05.070300 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-hn9cd_1ba8dd89-0865-4766-b216-b906d4d6f77a/manager/0.log"
Feb 19 13:45:05 crc kubenswrapper[4833]: I0219 13:45:05.270198 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-7d2vx_eed2b359-6b1f-4db4-947a-6ed3bf4385cc/manager/0.log"
Feb 19 13:45:05 crc kubenswrapper[4833]: I0219 13:45:05.410333 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-cvzzp_ab70788d-b168-497b-bea0-4847ee80ce73/manager/0.log"
Feb 19 13:45:05 crc kubenswrapper[4833]: I0219 13:45:05.513814 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-8wsws_a09fe0a0-c328-4306-b1de-c8bddc00378f/manager/0.log"
Feb 19 13:45:05 crc kubenswrapper[4833]: I0219 13:45:05.629982 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-njc9d_a14096df-2211-4053-afb4-ad8d68ff0723/manager/0.log"
Feb 19 13:45:05 crc kubenswrapper[4833]: I0219 13:45:05.750222 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-7n7vf_e6de77c2-2965-48a3-a79a-75539ca32b8b/manager/0.log"
Feb 19 13:45:05 crc kubenswrapper[4833]: I0219 13:45:05.959922 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-hprlt_eaf010f7-5113-4970-b963-682d17243fc9/manager/0.log"
Feb 19 13:45:06 crc kubenswrapper[4833]: I0219 13:45:06.177782 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-w4n4c_8df5aecb-140d-4845-b07c-ab75586e4b54/manager/0.log"
Feb 19 13:45:06 crc kubenswrapper[4833]: I0219 13:45:06.334524 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeddd17e-bcb2-4887-a818-e4617fc9599a" path="/var/lib/kubelet/pods/eeddd17e-bcb2-4887-a818-e4617fc9599a/volumes"
Feb 19 13:45:06 crc kubenswrapper[4833]: I0219 13:45:06.466127 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz_a8783b50-8a5e-4c9f-8f4b-513e4e0c7122/manager/0.log"
Feb 19 13:45:06 crc kubenswrapper[4833]: I0219 13:45:06.982182 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-568f98c69-t2vv5_59087e2d-5038-44cd-ab4d-1d1340e51c75/operator/0.log"
Feb 19 13:45:07 crc kubenswrapper[4833]: I0219 13:45:07.174380 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-8gfsk_8f9a2baf-c1a1-48a6-baa9-e73ad2dcac6e/registry-server/0.log"
Feb 19 13:45:07 crc kubenswrapper[4833]: I0219 13:45:07.448322 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-bbl2n_eddad8e9-ebc8-4772-9b30-76fc7bd09919/manager/0.log"
Feb 19 13:45:07 crc kubenswrapper[4833]: I0219 13:45:07.576066 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-7z4m9_4deeacce-2501-4276-98cf-cb615e0b4dce/manager/0.log"
Feb 19 13:45:07 crc kubenswrapper[4833]: I0219 13:45:07.645026 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-hrnmv_3a2db5f5-bbec-4673-b32b-eef31c488a12/manager/0.log"
Feb 19 13:45:07 crc kubenswrapper[4833]: I0219 13:45:07.759258 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-zwzkq_1e12420e-fd8b-4ef2-bc12-9b3be0efa58a/operator/0.log"
Feb 19 13:45:07 crc kubenswrapper[4833]: I0219 13:45:07.933436 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-cn5hb_636db3e6-7c84-4f25-896e-e3a542bdff19/manager/0.log"
Feb 19 13:45:08 crc kubenswrapper[4833]: I0219 13:45:08.116983 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-lw9pq_e0e6cafc-957b-4ebd-ad08-1bef03debe49/manager/0.log"
Feb 19 13:45:08 crc kubenswrapper[4833]: I0219 13:45:08.134878 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-cn6cv_84b0c5a7-e111-4ee9-999b-5da00d00ffd0/manager/0.log"
Feb 19 13:45:08 crc kubenswrapper[4833]: I0219 13:45:08.252312 4833 scope.go:117] "RemoveContainer" containerID="d05530159aa02b1161112011c7d1bbd785fcfcd80ca8ddaa06f8d02ca065a401"
Feb 19 13:45:08 crc kubenswrapper[4833]: I0219 13:45:08.378806 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-24cxm_d7b1ebb3-ea0b-4e2f-b27a-e77abee17693/manager/0.log"
Feb 19 13:45:08 crc kubenswrapper[4833]: I0219 13:45:08.669835 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-744c6f7bcc-jsmlm_81d2c5dc-91fd-4135-8408-104fc7badb60/manager/0.log"
Feb 19 13:45:10 crc kubenswrapper[4833]: I0219 13:45:10.595802 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-g59gc_4b94f9da-5e45-4428-a709-24574552d77e/manager/0.log"
Feb 19 13:45:15 crc kubenswrapper[4833]: I0219 13:45:15.744583 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 13:45:15 crc kubenswrapper[4833]: I0219 13:45:15.745042 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 13:45:17 crc kubenswrapper[4833]: I0219 13:45:17.643360 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fg5dv"]
Feb 19 13:45:17 crc kubenswrapper[4833]: E0219 13:45:17.644170 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba353eac-d0ad-4bd9-a287-e85a290dd5ca" containerName="collect-profiles"
Feb 19 13:45:17 crc kubenswrapper[4833]: I0219 13:45:17.644183 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba353eac-d0ad-4bd9-a287-e85a290dd5ca" containerName="collect-profiles"
Feb 19 13:45:17 crc kubenswrapper[4833]: I0219 13:45:17.644405 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba353eac-d0ad-4bd9-a287-e85a290dd5ca" containerName="collect-profiles"
Feb 19 13:45:17 crc kubenswrapper[4833]: I0219 13:45:17.645888 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fg5dv"
Feb 19 13:45:17 crc kubenswrapper[4833]: I0219 13:45:17.655093 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fg5dv"]
Feb 19 13:45:17 crc kubenswrapper[4833]: I0219 13:45:17.775090 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3045198-fd27-4a56-bced-66cdddc4593a-catalog-content\") pod \"redhat-operators-fg5dv\" (UID: \"e3045198-fd27-4a56-bced-66cdddc4593a\") " pod="openshift-marketplace/redhat-operators-fg5dv"
Feb 19 13:45:17 crc kubenswrapper[4833]: I0219 13:45:17.775174 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3045198-fd27-4a56-bced-66cdddc4593a-utilities\") pod \"redhat-operators-fg5dv\" (UID: \"e3045198-fd27-4a56-bced-66cdddc4593a\") " pod="openshift-marketplace/redhat-operators-fg5dv"
Feb 19 13:45:17 crc kubenswrapper[4833]: I0219 13:45:17.775309 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dvv8\" (UniqueName: \"kubernetes.io/projected/e3045198-fd27-4a56-bced-66cdddc4593a-kube-api-access-2dvv8\") pod \"redhat-operators-fg5dv\" (UID: \"e3045198-fd27-4a56-bced-66cdddc4593a\") " pod="openshift-marketplace/redhat-operators-fg5dv"
Feb 19 13:45:17 crc kubenswrapper[4833]: I0219 13:45:17.877518 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dvv8\" (UniqueName: \"kubernetes.io/projected/e3045198-fd27-4a56-bced-66cdddc4593a-kube-api-access-2dvv8\") pod \"redhat-operators-fg5dv\" (UID: \"e3045198-fd27-4a56-bced-66cdddc4593a\") " pod="openshift-marketplace/redhat-operators-fg5dv"
Feb 19 13:45:17 crc kubenswrapper[4833]: I0219 13:45:17.877602 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3045198-fd27-4a56-bced-66cdddc4593a-catalog-content\") pod \"redhat-operators-fg5dv\" (UID: \"e3045198-fd27-4a56-bced-66cdddc4593a\") " pod="openshift-marketplace/redhat-operators-fg5dv"
Feb 19 13:45:17 crc kubenswrapper[4833]: I0219 13:45:17.877676 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3045198-fd27-4a56-bced-66cdddc4593a-utilities\") pod \"redhat-operators-fg5dv\" (UID: \"e3045198-fd27-4a56-bced-66cdddc4593a\") " pod="openshift-marketplace/redhat-operators-fg5dv"
Feb 19 13:45:17 crc kubenswrapper[4833]: I0219 13:45:17.878169 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3045198-fd27-4a56-bced-66cdddc4593a-utilities\") pod \"redhat-operators-fg5dv\" (UID: \"e3045198-fd27-4a56-bced-66cdddc4593a\") " pod="openshift-marketplace/redhat-operators-fg5dv"
Feb 19 13:45:17 crc kubenswrapper[4833]: I0219 13:45:17.878707 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3045198-fd27-4a56-bced-66cdddc4593a-catalog-content\") pod \"redhat-operators-fg5dv\" (UID: \"e3045198-fd27-4a56-bced-66cdddc4593a\") " pod="openshift-marketplace/redhat-operators-fg5dv"
Feb 19 13:45:17 crc kubenswrapper[4833]: I0219 13:45:17.899032 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dvv8\" (UniqueName: \"kubernetes.io/projected/e3045198-fd27-4a56-bced-66cdddc4593a-kube-api-access-2dvv8\") pod \"redhat-operators-fg5dv\" (UID: \"e3045198-fd27-4a56-bced-66cdddc4593a\") " pod="openshift-marketplace/redhat-operators-fg5dv"
Feb 19 13:45:17 crc kubenswrapper[4833]: I0219 13:45:17.975138 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fg5dv"
Feb 19 13:45:18 crc kubenswrapper[4833]: I0219 13:45:18.509007 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fg5dv"]
Feb 19 13:45:19 crc kubenswrapper[4833]: I0219 13:45:19.457596 4833 generic.go:334] "Generic (PLEG): container finished" podID="e3045198-fd27-4a56-bced-66cdddc4593a" containerID="c6cd5f43421bf00a9266be6001b3a237bee8820020b0bdf1614483c27420a73e" exitCode=0
Feb 19 13:45:19 crc kubenswrapper[4833]: I0219 13:45:19.457675 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fg5dv" event={"ID":"e3045198-fd27-4a56-bced-66cdddc4593a","Type":"ContainerDied","Data":"c6cd5f43421bf00a9266be6001b3a237bee8820020b0bdf1614483c27420a73e"}
Feb 19 13:45:19 crc kubenswrapper[4833]: I0219 13:45:19.458109 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fg5dv" event={"ID":"e3045198-fd27-4a56-bced-66cdddc4593a","Type":"ContainerStarted","Data":"d922b49577c4afbf3aeec10f307395be718339894bb79a8acc90105c50a2293f"}
Feb 19 13:45:20 crc kubenswrapper[4833]: I0219 13:45:20.468598 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fg5dv" event={"ID":"e3045198-fd27-4a56-bced-66cdddc4593a","Type":"ContainerStarted","Data":"195fd5047dde90f598fcf5cc0b03cf7d7183bc266288309d367d96fefefd8b3f"}
Feb 19 13:45:21 crc kubenswrapper[4833]: I0219 13:45:21.481740 4833 generic.go:334] "Generic (PLEG): container finished" podID="e3045198-fd27-4a56-bced-66cdddc4593a" containerID="195fd5047dde90f598fcf5cc0b03cf7d7183bc266288309d367d96fefefd8b3f" exitCode=0
Feb 19 13:45:21 crc kubenswrapper[4833]: I0219 13:45:21.481794 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fg5dv" event={"ID":"e3045198-fd27-4a56-bced-66cdddc4593a","Type":"ContainerDied","Data":"195fd5047dde90f598fcf5cc0b03cf7d7183bc266288309d367d96fefefd8b3f"}
Feb 19 13:45:22 crc kubenswrapper[4833]: I0219 13:45:22.493448 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fg5dv" event={"ID":"e3045198-fd27-4a56-bced-66cdddc4593a","Type":"ContainerStarted","Data":"94194bd4984d757e3d77791dd3a560fd1480fdb997bc06cd7cd59af10ec894b8"}
Feb 19 13:45:22 crc kubenswrapper[4833]: I0219 13:45:22.512219 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fg5dv" podStartSLOduration=2.9432157009999997 podStartE2EDuration="5.51219925s" podCreationTimestamp="2026-02-19 13:45:17 +0000 UTC" firstStartedPulling="2026-02-19 13:45:19.45921366 +0000 UTC m=+3529.854732428" lastFinishedPulling="2026-02-19 13:45:22.028197209 +0000 UTC m=+3532.423715977" observedRunningTime="2026-02-19 13:45:22.50957088 +0000 UTC m=+3532.905089658" watchObservedRunningTime="2026-02-19 13:45:22.51219925 +0000 UTC m=+3532.907718028"
Feb 19 13:45:27 crc kubenswrapper[4833]: I0219 13:45:27.976020 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fg5dv"
Feb 19 13:45:27 crc kubenswrapper[4833]: I0219 13:45:27.976684 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fg5dv"
Feb 19 13:45:29 crc kubenswrapper[4833]: I0219 13:45:29.039294 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fg5dv" podUID="e3045198-fd27-4a56-bced-66cdddc4593a" containerName="registry-server" probeResult="failure" output=<
Feb 19 13:45:29 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s
Feb 19 13:45:29 crc kubenswrapper[4833]: >
Feb 19 13:45:29 crc kubenswrapper[4833]: I0219 13:45:29.244367 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-4gj46_ea6cc7f7-b2fa-40d4-93cd-795a01861ecb/control-plane-machine-set-operator/0.log"
Feb 19 13:45:29 crc kubenswrapper[4833]: I0219 13:45:29.479527 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-lmrs2_8a863328-15b8-46bc-9ffd-faa97add46ea/kube-rbac-proxy/0.log"
Feb 19 13:45:29 crc kubenswrapper[4833]: I0219 13:45:29.573939 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-lmrs2_8a863328-15b8-46bc-9ffd-faa97add46ea/machine-api-operator/0.log"
Feb 19 13:45:34 crc kubenswrapper[4833]: I0219 13:45:34.010973 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cxcxf"]
Feb 19 13:45:34 crc kubenswrapper[4833]: I0219 13:45:34.013947 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cxcxf"
Feb 19 13:45:34 crc kubenswrapper[4833]: I0219 13:45:34.026168 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cxcxf"]
Feb 19 13:45:34 crc kubenswrapper[4833]: I0219 13:45:34.180685 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbca41d8-40d9-4706-9a16-7d2e3f54c0ff-catalog-content\") pod \"certified-operators-cxcxf\" (UID: \"fbca41d8-40d9-4706-9a16-7d2e3f54c0ff\") " pod="openshift-marketplace/certified-operators-cxcxf"
Feb 19 13:45:34 crc kubenswrapper[4833]: I0219 13:45:34.180817 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbca41d8-40d9-4706-9a16-7d2e3f54c0ff-utilities\") pod \"certified-operators-cxcxf\" (UID: \"fbca41d8-40d9-4706-9a16-7d2e3f54c0ff\") " pod="openshift-marketplace/certified-operators-cxcxf"
Feb 19 13:45:34 crc kubenswrapper[4833]: I0219 13:45:34.180886 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pv7g\" (UniqueName: \"kubernetes.io/projected/fbca41d8-40d9-4706-9a16-7d2e3f54c0ff-kube-api-access-8pv7g\") pod \"certified-operators-cxcxf\" (UID: \"fbca41d8-40d9-4706-9a16-7d2e3f54c0ff\") " pod="openshift-marketplace/certified-operators-cxcxf"
Feb 19 13:45:34 crc kubenswrapper[4833]: I0219 13:45:34.282627 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pv7g\" (UniqueName: \"kubernetes.io/projected/fbca41d8-40d9-4706-9a16-7d2e3f54c0ff-kube-api-access-8pv7g\") pod \"certified-operators-cxcxf\" (UID: \"fbca41d8-40d9-4706-9a16-7d2e3f54c0ff\") " pod="openshift-marketplace/certified-operators-cxcxf"
Feb 19 13:45:34 crc kubenswrapper[4833]: I0219 13:45:34.282980 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbca41d8-40d9-4706-9a16-7d2e3f54c0ff-catalog-content\") pod \"certified-operators-cxcxf\" (UID: \"fbca41d8-40d9-4706-9a16-7d2e3f54c0ff\") " pod="openshift-marketplace/certified-operators-cxcxf"
Feb 19 13:45:34 crc kubenswrapper[4833]: I0219 13:45:34.283050 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbca41d8-40d9-4706-9a16-7d2e3f54c0ff-utilities\") pod \"certified-operators-cxcxf\" (UID: \"fbca41d8-40d9-4706-9a16-7d2e3f54c0ff\") " pod="openshift-marketplace/certified-operators-cxcxf"
Feb 19 13:45:34 crc kubenswrapper[4833]: I0219 13:45:34.283429 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbca41d8-40d9-4706-9a16-7d2e3f54c0ff-catalog-content\") pod \"certified-operators-cxcxf\" (UID: \"fbca41d8-40d9-4706-9a16-7d2e3f54c0ff\") " pod="openshift-marketplace/certified-operators-cxcxf"
Feb 19 13:45:34 crc kubenswrapper[4833]: I0219 13:45:34.283443 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbca41d8-40d9-4706-9a16-7d2e3f54c0ff-utilities\") pod \"certified-operators-cxcxf\" (UID: \"fbca41d8-40d9-4706-9a16-7d2e3f54c0ff\") " pod="openshift-marketplace/certified-operators-cxcxf"
Feb 19 13:45:34 crc kubenswrapper[4833]: I0219 13:45:34.312383 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pv7g\" (UniqueName: \"kubernetes.io/projected/fbca41d8-40d9-4706-9a16-7d2e3f54c0ff-kube-api-access-8pv7g\") pod \"certified-operators-cxcxf\" (UID: \"fbca41d8-40d9-4706-9a16-7d2e3f54c0ff\") " pod="openshift-marketplace/certified-operators-cxcxf"
Feb 19 13:45:34 crc kubenswrapper[4833]: I0219 13:45:34.340776 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cxcxf"
Feb 19 13:45:34 crc kubenswrapper[4833]: I0219 13:45:34.864852 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cxcxf"]
Feb 19 13:45:35 crc kubenswrapper[4833]: I0219 13:45:35.614389 4833 generic.go:334] "Generic (PLEG): container finished" podID="fbca41d8-40d9-4706-9a16-7d2e3f54c0ff" containerID="c80d209acc7056fcfc22a36d4facae4462f9df9e25b592bcfd43ca15ece59d5d" exitCode=0
Feb 19 13:45:35 crc kubenswrapper[4833]: I0219 13:45:35.614559 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxcxf" event={"ID":"fbca41d8-40d9-4706-9a16-7d2e3f54c0ff","Type":"ContainerDied","Data":"c80d209acc7056fcfc22a36d4facae4462f9df9e25b592bcfd43ca15ece59d5d"}
Feb 19 13:45:35 crc kubenswrapper[4833]: I0219 13:45:35.615023 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxcxf" event={"ID":"fbca41d8-40d9-4706-9a16-7d2e3f54c0ff","Type":"ContainerStarted","Data":"cf908ff101fb635aa932ff1d6059264083c4674555aeaeafa513fa23390b8d27"}
Feb 19 13:45:36 crc kubenswrapper[4833]: I0219 13:45:36.623660 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxcxf" event={"ID":"fbca41d8-40d9-4706-9a16-7d2e3f54c0ff","Type":"ContainerStarted","Data":"8ce0edfa4ef0bf7a996e9a7b8c758ef4ba2edd0cbd56d70a9fe4911fca2761c7"}
Feb 19 13:45:37 crc kubenswrapper[4833]: I0219 13:45:37.633613 4833 generic.go:334] "Generic (PLEG): container finished" podID="fbca41d8-40d9-4706-9a16-7d2e3f54c0ff" containerID="8ce0edfa4ef0bf7a996e9a7b8c758ef4ba2edd0cbd56d70a9fe4911fca2761c7" exitCode=0
Feb 19 13:45:37 crc kubenswrapper[4833]: I0219 13:45:37.633715 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxcxf" event={"ID":"fbca41d8-40d9-4706-9a16-7d2e3f54c0ff","Type":"ContainerDied","Data":"8ce0edfa4ef0bf7a996e9a7b8c758ef4ba2edd0cbd56d70a9fe4911fca2761c7"}
Feb 19 13:45:38 crc kubenswrapper[4833]: I0219 13:45:38.047592 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fg5dv"
Feb 19 13:45:38 crc kubenswrapper[4833]: I0219 13:45:38.132971 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fg5dv"
Feb 19 13:45:38 crc kubenswrapper[4833]: I0219 13:45:38.644563 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxcxf" event={"ID":"fbca41d8-40d9-4706-9a16-7d2e3f54c0ff","Type":"ContainerStarted","Data":"7d815724951c88c4b3aeeb9be7b3e4f86de8a321774049c27fa7699a4fff49c9"}
Feb 19 13:45:38 crc kubenswrapper[4833]: I0219 13:45:38.665761 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cxcxf" podStartSLOduration=3.167567216 podStartE2EDuration="5.66573744s" podCreationTimestamp="2026-02-19 13:45:33 +0000 UTC" firstStartedPulling="2026-02-19 13:45:35.616800748 +0000 UTC m=+3546.012319516" lastFinishedPulling="2026-02-19 13:45:38.114970972 +0000 UTC m=+3548.510489740" observedRunningTime="2026-02-19 13:45:38.663133161 +0000 UTC m=+3549.058651929" watchObservedRunningTime="2026-02-19 13:45:38.66573744 +0000 UTC m=+3549.061256228"
Feb 19 13:45:40 crc kubenswrapper[4833]: I0219 13:45:40.406411 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fg5dv"]
Feb 19 13:45:40 crc kubenswrapper[4833]: I0219 13:45:40.407171 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fg5dv" podUID="e3045198-fd27-4a56-bced-66cdddc4593a" containerName="registry-server" containerID="cri-o://94194bd4984d757e3d77791dd3a560fd1480fdb997bc06cd7cd59af10ec894b8" gracePeriod=2
Feb 19 13:45:40 crc kubenswrapper[4833]: I0219 13:45:40.670399 4833 generic.go:334] "Generic (PLEG): container finished" podID="e3045198-fd27-4a56-bced-66cdddc4593a" containerID="94194bd4984d757e3d77791dd3a560fd1480fdb997bc06cd7cd59af10ec894b8" exitCode=0
Feb 19 13:45:40 crc kubenswrapper[4833]: I0219 13:45:40.670451 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fg5dv" event={"ID":"e3045198-fd27-4a56-bced-66cdddc4593a","Type":"ContainerDied","Data":"94194bd4984d757e3d77791dd3a560fd1480fdb997bc06cd7cd59af10ec894b8"}
Feb 19 13:45:40 crc kubenswrapper[4833]: I0219 13:45:40.930066 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fg5dv"
Feb 19 13:45:41 crc kubenswrapper[4833]: I0219 13:45:41.009746 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3045198-fd27-4a56-bced-66cdddc4593a-catalog-content\") pod \"e3045198-fd27-4a56-bced-66cdddc4593a\" (UID: \"e3045198-fd27-4a56-bced-66cdddc4593a\") "
Feb 19 13:45:41 crc kubenswrapper[4833]: I0219 13:45:41.009905 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3045198-fd27-4a56-bced-66cdddc4593a-utilities\") pod \"e3045198-fd27-4a56-bced-66cdddc4593a\" (UID: \"e3045198-fd27-4a56-bced-66cdddc4593a\") "
Feb 19 13:45:41 crc kubenswrapper[4833]: I0219 13:45:41.009958 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dvv8\" (UniqueName: \"kubernetes.io/projected/e3045198-fd27-4a56-bced-66cdddc4593a-kube-api-access-2dvv8\") pod \"e3045198-fd27-4a56-bced-66cdddc4593a\" (UID: \"e3045198-fd27-4a56-bced-66cdddc4593a\") "
Feb 19 13:45:41 crc kubenswrapper[4833]: I0219 13:45:41.010265 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3045198-fd27-4a56-bced-66cdddc4593a-utilities" (OuterVolumeSpecName: "utilities") pod "e3045198-fd27-4a56-bced-66cdddc4593a" (UID: "e3045198-fd27-4a56-bced-66cdddc4593a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 13:45:41 crc kubenswrapper[4833]: I0219 13:45:41.010423 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e3045198-fd27-4a56-bced-66cdddc4593a-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 13:45:41 crc kubenswrapper[4833]: I0219 13:45:41.021595 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3045198-fd27-4a56-bced-66cdddc4593a-kube-api-access-2dvv8" (OuterVolumeSpecName: "kube-api-access-2dvv8") pod "e3045198-fd27-4a56-bced-66cdddc4593a" (UID: "e3045198-fd27-4a56-bced-66cdddc4593a"). InnerVolumeSpecName "kube-api-access-2dvv8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 13:45:41 crc kubenswrapper[4833]: I0219 13:45:41.112624 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dvv8\" (UniqueName: \"kubernetes.io/projected/e3045198-fd27-4a56-bced-66cdddc4593a-kube-api-access-2dvv8\") on node \"crc\" DevicePath \"\""
Feb 19 13:45:41 crc kubenswrapper[4833]: I0219 13:45:41.146903 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3045198-fd27-4a56-bced-66cdddc4593a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e3045198-fd27-4a56-bced-66cdddc4593a" (UID: "e3045198-fd27-4a56-bced-66cdddc4593a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 13:45:41 crc kubenswrapper[4833]: I0219 13:45:41.214195 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e3045198-fd27-4a56-bced-66cdddc4593a-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 13:45:41 crc kubenswrapper[4833]: I0219 13:45:41.680371 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fg5dv" event={"ID":"e3045198-fd27-4a56-bced-66cdddc4593a","Type":"ContainerDied","Data":"d922b49577c4afbf3aeec10f307395be718339894bb79a8acc90105c50a2293f"}
Feb 19 13:45:41 crc kubenswrapper[4833]: I0219 13:45:41.680425 4833 scope.go:117] "RemoveContainer" containerID="94194bd4984d757e3d77791dd3a560fd1480fdb997bc06cd7cd59af10ec894b8"
Feb 19 13:45:41 crc kubenswrapper[4833]: I0219 13:45:41.680577 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fg5dv"
Feb 19 13:45:41 crc kubenswrapper[4833]: I0219 13:45:41.724324 4833 scope.go:117] "RemoveContainer" containerID="195fd5047dde90f598fcf5cc0b03cf7d7183bc266288309d367d96fefefd8b3f"
Feb 19 13:45:41 crc kubenswrapper[4833]: I0219 13:45:41.730415 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fg5dv"]
Feb 19 13:45:41 crc kubenswrapper[4833]: I0219 13:45:41.740725 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fg5dv"]
Feb 19 13:45:41 crc kubenswrapper[4833]: I0219 13:45:41.748689 4833 scope.go:117] "RemoveContainer" containerID="c6cd5f43421bf00a9266be6001b3a237bee8820020b0bdf1614483c27420a73e"
Feb 19 13:45:42 crc kubenswrapper[4833]: I0219 13:45:42.323971 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3045198-fd27-4a56-bced-66cdddc4593a" path="/var/lib/kubelet/pods/e3045198-fd27-4a56-bced-66cdddc4593a/volumes"
Feb 19 13:45:44 crc kubenswrapper[4833]: I0219 13:45:44.342780 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cxcxf"
Feb 19 13:45:44 crc kubenswrapper[4833]: I0219 13:45:44.343124 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cxcxf"
Feb 19 13:45:44 crc kubenswrapper[4833]: I0219 13:45:44.407380 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cxcxf"
Feb 19 13:45:44 crc kubenswrapper[4833]: I0219 13:45:44.758144 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cxcxf"
Feb 19 13:45:44 crc kubenswrapper[4833]: I0219 13:45:44.770243 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-2zxnh_0ec64263-cb6a-407b-9ef7-a06af9f1df98/cert-manager-controller/0.log"
Feb 19 13:45:44 crc kubenswrapper[4833]: I0219 13:45:44.908359 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-rpjfw_96ec41c8-cde8-48b8-99ac-9b56a2e86761/cert-manager-cainjector/0.log"
Feb 19 13:45:44 crc kubenswrapper[4833]: I0219 13:45:44.955121 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-t7gnv_bba14386-87d4-4397-9ba7-beaafe4c15de/cert-manager-webhook/0.log"
Feb 19 13:45:45 crc kubenswrapper[4833]: I0219 13:45:45.396565 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cxcxf"]
Feb 19 13:45:45 crc kubenswrapper[4833]: I0219 13:45:45.744112 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 13:45:45 crc kubenswrapper[4833]: I0219 13:45:45.744177 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 13:45:46 crc kubenswrapper[4833]: I0219 13:45:46.725752 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cxcxf" podUID="fbca41d8-40d9-4706-9a16-7d2e3f54c0ff" containerName="registry-server" containerID="cri-o://7d815724951c88c4b3aeeb9be7b3e4f86de8a321774049c27fa7699a4fff49c9" gracePeriod=2
Feb 19 13:45:47 crc kubenswrapper[4833]: I0219 13:45:47.190933 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cxcxf"
Feb 19 13:45:47 crc kubenswrapper[4833]: I0219 13:45:47.225894 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pv7g\" (UniqueName: \"kubernetes.io/projected/fbca41d8-40d9-4706-9a16-7d2e3f54c0ff-kube-api-access-8pv7g\") pod \"fbca41d8-40d9-4706-9a16-7d2e3f54c0ff\" (UID: \"fbca41d8-40d9-4706-9a16-7d2e3f54c0ff\") "
Feb 19 13:45:47 crc kubenswrapper[4833]: I0219 13:45:47.226022 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbca41d8-40d9-4706-9a16-7d2e3f54c0ff-catalog-content\") pod \"fbca41d8-40d9-4706-9a16-7d2e3f54c0ff\" (UID: \"fbca41d8-40d9-4706-9a16-7d2e3f54c0ff\") "
Feb 19 13:45:47 crc kubenswrapper[4833]: I0219 13:45:47.226141 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbca41d8-40d9-4706-9a16-7d2e3f54c0ff-utilities\") pod \"fbca41d8-40d9-4706-9a16-7d2e3f54c0ff\" (UID: \"fbca41d8-40d9-4706-9a16-7d2e3f54c0ff\") "
Feb 19 13:45:47 crc kubenswrapper[4833]: I0219 13:45:47.227443 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbca41d8-40d9-4706-9a16-7d2e3f54c0ff-utilities" (OuterVolumeSpecName: "utilities") pod "fbca41d8-40d9-4706-9a16-7d2e3f54c0ff" (UID: "fbca41d8-40d9-4706-9a16-7d2e3f54c0ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 13:45:47 crc kubenswrapper[4833]: I0219 13:45:47.287167 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbca41d8-40d9-4706-9a16-7d2e3f54c0ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fbca41d8-40d9-4706-9a16-7d2e3f54c0ff" (UID: "fbca41d8-40d9-4706-9a16-7d2e3f54c0ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 19 13:45:47 crc kubenswrapper[4833]: I0219 13:45:47.328712 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbca41d8-40d9-4706-9a16-7d2e3f54c0ff-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 13:45:47 crc kubenswrapper[4833]: I0219 13:45:47.328761 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbca41d8-40d9-4706-9a16-7d2e3f54c0ff-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 13:45:47 crc kubenswrapper[4833]: I0219 13:45:47.596573 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbca41d8-40d9-4706-9a16-7d2e3f54c0ff-kube-api-access-8pv7g" (OuterVolumeSpecName: "kube-api-access-8pv7g") pod "fbca41d8-40d9-4706-9a16-7d2e3f54c0ff" (UID: "fbca41d8-40d9-4706-9a16-7d2e3f54c0ff"). InnerVolumeSpecName "kube-api-access-8pv7g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 13:45:47 crc kubenswrapper[4833]: I0219 13:45:47.634372 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pv7g\" (UniqueName: \"kubernetes.io/projected/fbca41d8-40d9-4706-9a16-7d2e3f54c0ff-kube-api-access-8pv7g\") on node \"crc\" DevicePath \"\""
Feb 19 13:45:47 crc kubenswrapper[4833]: I0219 13:45:47.739960 4833 generic.go:334] "Generic (PLEG): container finished" podID="fbca41d8-40d9-4706-9a16-7d2e3f54c0ff" containerID="7d815724951c88c4b3aeeb9be7b3e4f86de8a321774049c27fa7699a4fff49c9" exitCode=0
Feb 19 13:45:47 crc kubenswrapper[4833]: I0219 13:45:47.740025 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxcxf" event={"ID":"fbca41d8-40d9-4706-9a16-7d2e3f54c0ff","Type":"ContainerDied","Data":"7d815724951c88c4b3aeeb9be7b3e4f86de8a321774049c27fa7699a4fff49c9"}
Feb 19 13:45:47 crc kubenswrapper[4833]: I0219 13:45:47.740069 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cxcxf" event={"ID":"fbca41d8-40d9-4706-9a16-7d2e3f54c0ff","Type":"ContainerDied","Data":"cf908ff101fb635aa932ff1d6059264083c4674555aeaeafa513fa23390b8d27"}
Feb 19 13:45:47 crc kubenswrapper[4833]: I0219 13:45:47.740075 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cxcxf"
Feb 19 13:45:47 crc kubenswrapper[4833]: I0219 13:45:47.740091 4833 scope.go:117] "RemoveContainer" containerID="7d815724951c88c4b3aeeb9be7b3e4f86de8a321774049c27fa7699a4fff49c9"
Feb 19 13:45:47 crc kubenswrapper[4833]: I0219 13:45:47.768064 4833 scope.go:117] "RemoveContainer" containerID="8ce0edfa4ef0bf7a996e9a7b8c758ef4ba2edd0cbd56d70a9fe4911fca2761c7"
Feb 19 13:45:47 crc kubenswrapper[4833]: I0219 13:45:47.801688 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cxcxf"]
Feb 19 13:45:47 crc kubenswrapper[4833]: I0219 13:45:47.805862 4833 scope.go:117] "RemoveContainer" containerID="c80d209acc7056fcfc22a36d4facae4462f9df9e25b592bcfd43ca15ece59d5d"
Feb 19 13:45:47 crc kubenswrapper[4833]: I0219 13:45:47.809035 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cxcxf"]
Feb 19 13:45:47 crc kubenswrapper[4833]: I0219 13:45:47.865035 4833 scope.go:117] "RemoveContainer" containerID="7d815724951c88c4b3aeeb9be7b3e4f86de8a321774049c27fa7699a4fff49c9"
Feb 19 13:45:47 crc kubenswrapper[4833]: E0219 13:45:47.865593 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d815724951c88c4b3aeeb9be7b3e4f86de8a321774049c27fa7699a4fff49c9\": container with ID starting with 7d815724951c88c4b3aeeb9be7b3e4f86de8a321774049c27fa7699a4fff49c9 not found: ID does not exist" containerID="7d815724951c88c4b3aeeb9be7b3e4f86de8a321774049c27fa7699a4fff49c9"
Feb 19 13:45:47 crc kubenswrapper[4833]: I0219 13:45:47.865647 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d815724951c88c4b3aeeb9be7b3e4f86de8a321774049c27fa7699a4fff49c9"} err="failed to get container status \"7d815724951c88c4b3aeeb9be7b3e4f86de8a321774049c27fa7699a4fff49c9\": rpc error: code = NotFound desc = could not find container \"7d815724951c88c4b3aeeb9be7b3e4f86de8a321774049c27fa7699a4fff49c9\": container with ID starting with 7d815724951c88c4b3aeeb9be7b3e4f86de8a321774049c27fa7699a4fff49c9 not found: ID does not exist"
Feb 19 13:45:47 crc kubenswrapper[4833]: I0219 13:45:47.865679 4833 scope.go:117] "RemoveContainer" containerID="8ce0edfa4ef0bf7a996e9a7b8c758ef4ba2edd0cbd56d70a9fe4911fca2761c7"
Feb 19 13:45:47 crc kubenswrapper[4833]: E0219 13:45:47.866120 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ce0edfa4ef0bf7a996e9a7b8c758ef4ba2edd0cbd56d70a9fe4911fca2761c7\": container with ID starting with 8ce0edfa4ef0bf7a996e9a7b8c758ef4ba2edd0cbd56d70a9fe4911fca2761c7 not found: ID does not exist" containerID="8ce0edfa4ef0bf7a996e9a7b8c758ef4ba2edd0cbd56d70a9fe4911fca2761c7"
Feb 19 13:45:47 crc kubenswrapper[4833]: I0219 13:45:47.866151 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ce0edfa4ef0bf7a996e9a7b8c758ef4ba2edd0cbd56d70a9fe4911fca2761c7"} err="failed to get container status \"8ce0edfa4ef0bf7a996e9a7b8c758ef4ba2edd0cbd56d70a9fe4911fca2761c7\": rpc error: code = NotFound desc = could not find container \"8ce0edfa4ef0bf7a996e9a7b8c758ef4ba2edd0cbd56d70a9fe4911fca2761c7\": container with ID starting with 8ce0edfa4ef0bf7a996e9a7b8c758ef4ba2edd0cbd56d70a9fe4911fca2761c7 not found: ID does not exist"
Feb 19 13:45:47 crc kubenswrapper[4833]: I0219 13:45:47.866166 4833 scope.go:117] "RemoveContainer" containerID="c80d209acc7056fcfc22a36d4facae4462f9df9e25b592bcfd43ca15ece59d5d"
Feb 19 13:45:47 crc kubenswrapper[4833]: E0219 13:45:47.874939 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c80d209acc7056fcfc22a36d4facae4462f9df9e25b592bcfd43ca15ece59d5d\": container with ID starting with c80d209acc7056fcfc22a36d4facae4462f9df9e25b592bcfd43ca15ece59d5d not found: ID does not exist" containerID="c80d209acc7056fcfc22a36d4facae4462f9df9e25b592bcfd43ca15ece59d5d"
Feb 19 13:45:47 crc kubenswrapper[4833]: I0219 13:45:47.874996 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c80d209acc7056fcfc22a36d4facae4462f9df9e25b592bcfd43ca15ece59d5d"} err="failed to get container status \"c80d209acc7056fcfc22a36d4facae4462f9df9e25b592bcfd43ca15ece59d5d\": rpc error: code = NotFound desc = could not find container \"c80d209acc7056fcfc22a36d4facae4462f9df9e25b592bcfd43ca15ece59d5d\": container with ID starting with c80d209acc7056fcfc22a36d4facae4462f9df9e25b592bcfd43ca15ece59d5d not found: ID does not exist"
Feb 19 13:45:48 crc kubenswrapper[4833]: I0219 13:45:48.329814 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbca41d8-40d9-4706-9a16-7d2e3f54c0ff" path="/var/lib/kubelet/pods/fbca41d8-40d9-4706-9a16-7d2e3f54c0ff/volumes"
Feb 19 13:45:59 crc kubenswrapper[4833]: I0219 13:45:59.324202 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-dd2ct_61daa4d0-c750-45c0-83b1-99ec44ba8842/nmstate-console-plugin/0.log"
Feb 19 13:45:59 crc kubenswrapper[4833]: I0219 13:45:59.562452 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-pb67x_d5e6d19d-fb8f-4313-bd78-d5f82fa79e40/nmstate-handler/0.log"
Feb 19 13:45:59 crc kubenswrapper[4833]: I0219 13:45:59.598439 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-4szmh_ae4fac4c-baa2-4e07-aa2a-e1fa2f28aeed/kube-rbac-proxy/0.log"
Feb 19 13:45:59 crc kubenswrapper[4833]: I0219 13:45:59.664589 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-4szmh_ae4fac4c-baa2-4e07-aa2a-e1fa2f28aeed/nmstate-metrics/0.log"
Feb 19 13:45:59 crc kubenswrapper[4833]: I0219 13:45:59.781641 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-sqwkx_f99e9621-3d59-431a-874e-0ecb2370cda1/nmstate-operator/0.log"
Feb 19 13:45:59 crc kubenswrapper[4833]: I0219 13:45:59.894035 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-x2ld2_336bca49-9b02-4814-a710-f133cc1d3e46/nmstate-webhook/0.log"
Feb 19 13:46:09 crc kubenswrapper[4833]: I0219 13:46:09.437725 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-756fd4958c-8cv9q" podUID="c148317b-fc12-4940-8fb0-587c8eff29f9" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502"
Feb 19 13:46:15 crc kubenswrapper[4833]: I0219 13:46:15.744708 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 13:46:15 crc kubenswrapper[4833]: I0219 13:46:15.746460 4833
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:46:15 crc kubenswrapper[4833]: I0219 13:46:15.746644 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" Feb 19 13:46:15 crc kubenswrapper[4833]: I0219 13:46:15.747583 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dadbbd04deb7bfa91640633352760633123f4c60efb49cdd6b578eff95c9ffcc"} pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 13:46:15 crc kubenswrapper[4833]: I0219 13:46:15.747766 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" containerID="cri-o://dadbbd04deb7bfa91640633352760633123f4c60efb49cdd6b578eff95c9ffcc" gracePeriod=600 Feb 19 13:46:16 crc kubenswrapper[4833]: I0219 13:46:16.010549 4833 generic.go:334] "Generic (PLEG): container finished" podID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerID="dadbbd04deb7bfa91640633352760633123f4c60efb49cdd6b578eff95c9ffcc" exitCode=0 Feb 19 13:46:16 crc kubenswrapper[4833]: I0219 13:46:16.010595 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" event={"ID":"a396d626-cea2-42cf-84c5-943b0b85a92b","Type":"ContainerDied","Data":"dadbbd04deb7bfa91640633352760633123f4c60efb49cdd6b578eff95c9ffcc"} Feb 19 13:46:16 crc kubenswrapper[4833]: I0219 13:46:16.010895 4833 scope.go:117] "RemoveContainer" containerID="2b2c9ce8fca5d7b68e625bbda1e92a6813d290bf01130f898c68091491f1d19a" Feb 19 13:46:17 crc kubenswrapper[4833]: I0219 13:46:17.024170 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" event={"ID":"a396d626-cea2-42cf-84c5-943b0b85a92b","Type":"ContainerStarted","Data":"c8997d03737db42a2d58f2936e1f212ab53c9697cb8c50a8c4a60174788b9509"} Feb 19 13:46:28 crc kubenswrapper[4833]: I0219 13:46:28.192448 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-7hv4q_810d0dc6-4fd1-4c62-838b-f759e361ea26/kube-rbac-proxy/0.log" Feb 19 13:46:28 crc kubenswrapper[4833]: I0219 13:46:28.218168 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-7hv4q_810d0dc6-4fd1-4c62-838b-f759e361ea26/controller/0.log" Feb 19 13:46:28 crc kubenswrapper[4833]: I0219 13:46:28.402588 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/cp-frr-files/0.log" Feb 19 13:46:28 crc kubenswrapper[4833]: I0219 13:46:28.549358 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/cp-frr-files/0.log" Feb 19 13:46:28 crc kubenswrapper[4833]: I0219 13:46:28.590619 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/cp-reloader/0.log" Feb 19 13:46:28 crc kubenswrapper[4833]: I0219 13:46:28.611131 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/cp-metrics/0.log" Feb 19 13:46:28 crc kubenswrapper[4833]: I0219 13:46:28.611278 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/cp-reloader/0.log" Feb 19 13:46:28 crc kubenswrapper[4833]: I0219 13:46:28.799688 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/cp-reloader/0.log" Feb 19 13:46:28 crc kubenswrapper[4833]: I0219 13:46:28.863076 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/cp-metrics/0.log" Feb 19 13:46:28 crc kubenswrapper[4833]: I0219 13:46:28.871930 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/cp-frr-files/0.log" Feb 19 13:46:28 crc kubenswrapper[4833]: I0219 13:46:28.872865 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/cp-metrics/0.log" Feb 19 13:46:29 crc kubenswrapper[4833]: I0219 13:46:29.015892 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/cp-frr-files/0.log" Feb 19 13:46:29 crc kubenswrapper[4833]: I0219 13:46:29.026114 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/cp-reloader/0.log" Feb 19 13:46:29 crc kubenswrapper[4833]: I0219 13:46:29.042785 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/cp-metrics/0.log" Feb 19 13:46:29 crc kubenswrapper[4833]: I0219 13:46:29.053465 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/controller/0.log" Feb 19 13:46:29 crc kubenswrapper[4833]: I0219 13:46:29.170062 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/frr-metrics/0.log" Feb 19 13:46:29 crc kubenswrapper[4833]: I0219 13:46:29.233846 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/kube-rbac-proxy-frr/0.log" Feb 19 13:46:29 crc kubenswrapper[4833]: I0219 13:46:29.247985 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/kube-rbac-proxy/0.log" Feb 19 13:46:29 crc kubenswrapper[4833]: I0219 13:46:29.370915 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/reloader/0.log" Feb 19 13:46:29 crc kubenswrapper[4833]: I0219 13:46:29.502537 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-69h4r_62869774-530a-477d-bac0-df5e4fba9daa/frr-k8s-webhook-server/0.log" Feb 19 13:46:29 crc kubenswrapper[4833]: I0219 13:46:29.756366 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-595bc44cf4-flp9d_84aafe4e-e69c-4cdb-8987-71eb568e3c6b/manager/0.log" Feb 19 13:46:29 crc kubenswrapper[4833]: I0219 13:46:29.849035 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-55ff9b8c6-prv8k_89d5e852-a20e-4eb4-a37a-6ecdbaf05484/webhook-server/0.log" Feb 19 13:46:29 crc kubenswrapper[4833]: I0219 13:46:29.977633 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9tx4p_e5afb617-4d1b-4c96-a669-f669e870501f/kube-rbac-proxy/0.log" Feb 19 13:46:30 crc kubenswrapper[4833]: I0219 13:46:30.456731 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9tx4p_e5afb617-4d1b-4c96-a669-f669e870501f/speaker/0.log" Feb 19 13:46:30 crc kubenswrapper[4833]: I0219 13:46:30.570086 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/frr/0.log" Feb 19 13:46:43 crc kubenswrapper[4833]: I0219 13:46:43.373437 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx_29dbbeed-6768-4f91-a9ab-ad93f33f9896/util/0.log" Feb 19 13:46:43 crc kubenswrapper[4833]: I0219 13:46:43.724815 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx_29dbbeed-6768-4f91-a9ab-ad93f33f9896/util/0.log" Feb 19 13:46:43 crc kubenswrapper[4833]: I0219 13:46:43.750759 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx_29dbbeed-6768-4f91-a9ab-ad93f33f9896/pull/0.log" Feb 19 13:46:43 crc kubenswrapper[4833]: I0219 13:46:43.757966 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx_29dbbeed-6768-4f91-a9ab-ad93f33f9896/pull/0.log" Feb 19 13:46:43 crc kubenswrapper[4833]: I0219 13:46:43.916672 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx_29dbbeed-6768-4f91-a9ab-ad93f33f9896/extract/0.log" Feb 19 13:46:43 crc kubenswrapper[4833]: I0219 13:46:43.926053 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx_29dbbeed-6768-4f91-a9ab-ad93f33f9896/util/0.log" Feb 19 13:46:43 crc kubenswrapper[4833]: I0219 13:46:43.930337 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx_29dbbeed-6768-4f91-a9ab-ad93f33f9896/pull/0.log" Feb 19 13:46:44 crc kubenswrapper[4833]: I0219 13:46:44.057129 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7cc2n_d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4/extract-utilities/0.log" Feb 19 13:46:44 crc kubenswrapper[4833]: I0219 13:46:44.247513 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7cc2n_d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4/extract-utilities/0.log" Feb 19 13:46:44 crc kubenswrapper[4833]: I0219 13:46:44.276623 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-7cc2n_d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4/extract-content/0.log" Feb 19 13:46:44 crc kubenswrapper[4833]: I0219 13:46:44.298083 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7cc2n_d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4/extract-content/0.log" Feb 19 13:46:44 crc kubenswrapper[4833]: I0219 13:46:44.484717 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7cc2n_d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4/extract-content/0.log" Feb 19 13:46:44 crc kubenswrapper[4833]: I0219 13:46:44.496897 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7cc2n_d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4/extract-utilities/0.log" Feb 19 13:46:44 crc kubenswrapper[4833]: I0219 13:46:44.670963 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mkws7_6237e030-4362-477a-a4dc-b18cbfa467fe/extract-utilities/0.log" Feb 19 13:46:44 crc kubenswrapper[4833]: I0219 13:46:44.868837 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mkws7_6237e030-4362-477a-a4dc-b18cbfa467fe/extract-utilities/0.log" Feb 19 13:46:44 crc kubenswrapper[4833]: I0219 13:46:44.907842 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mkws7_6237e030-4362-477a-a4dc-b18cbfa467fe/extract-content/0.log" Feb 19 13:46:44 crc kubenswrapper[4833]: I0219 13:46:44.923691 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mkws7_6237e030-4362-477a-a4dc-b18cbfa467fe/extract-content/0.log" Feb 19 13:46:45 crc kubenswrapper[4833]: I0219 13:46:45.127031 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7cc2n_d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4/registry-server/0.log" Feb 19 13:46:45 crc kubenswrapper[4833]: I0219 13:46:45.139004 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mkws7_6237e030-4362-477a-a4dc-b18cbfa467fe/extract-utilities/0.log" Feb 19 13:46:45 crc kubenswrapper[4833]: I0219 13:46:45.169574 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mkws7_6237e030-4362-477a-a4dc-b18cbfa467fe/extract-content/0.log" Feb 19 13:46:45 crc kubenswrapper[4833]: I0219 13:46:45.364239 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx_a61c47e0-c103-451d-802c-fbdebf10dbd9/util/0.log" Feb 19 13:46:45 crc kubenswrapper[4833]: I0219 13:46:45.558106 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx_a61c47e0-c103-451d-802c-fbdebf10dbd9/pull/0.log" Feb 19 13:46:45 crc kubenswrapper[4833]: I0219 13:46:45.603052 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx_a61c47e0-c103-451d-802c-fbdebf10dbd9/util/0.log" Feb 19 13:46:45 crc kubenswrapper[4833]: I0219 13:46:45.607856 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx_a61c47e0-c103-451d-802c-fbdebf10dbd9/pull/0.log" Feb 19 13:46:45 crc 
kubenswrapper[4833]: I0219 13:46:45.762306 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx_a61c47e0-c103-451d-802c-fbdebf10dbd9/util/0.log" Feb 19 13:46:45 crc kubenswrapper[4833]: I0219 13:46:45.817662 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx_a61c47e0-c103-451d-802c-fbdebf10dbd9/extract/0.log" Feb 19 13:46:45 crc kubenswrapper[4833]: I0219 13:46:45.817892 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx_a61c47e0-c103-451d-802c-fbdebf10dbd9/pull/0.log" Feb 19 13:46:45 crc kubenswrapper[4833]: I0219 13:46:45.850895 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mkws7_6237e030-4362-477a-a4dc-b18cbfa467fe/registry-server/0.log" Feb 19 13:46:45 crc kubenswrapper[4833]: I0219 13:46:45.984921 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-c78pj_29007976-47bd-4251-8d1e-043d4c87270d/marketplace-operator/0.log" Feb 19 13:46:46 crc kubenswrapper[4833]: I0219 13:46:46.072363 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4vk2l_c4f33a97-ce68-43a9-a79b-df50f34c1f96/extract-utilities/0.log" Feb 19 13:46:46 crc kubenswrapper[4833]: I0219 13:46:46.208747 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4vk2l_c4f33a97-ce68-43a9-a79b-df50f34c1f96/extract-utilities/0.log" Feb 19 13:46:46 crc kubenswrapper[4833]: I0219 13:46:46.254366 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4vk2l_c4f33a97-ce68-43a9-a79b-df50f34c1f96/extract-content/0.log" Feb 19 13:46:46 crc kubenswrapper[4833]: I0219 13:46:46.287238 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4vk2l_c4f33a97-ce68-43a9-a79b-df50f34c1f96/extract-content/0.log" Feb 19 13:46:46 crc kubenswrapper[4833]: I0219 13:46:46.418733 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4vk2l_c4f33a97-ce68-43a9-a79b-df50f34c1f96/extract-content/0.log" Feb 19 13:46:46 crc kubenswrapper[4833]: I0219 13:46:46.466123 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4vk2l_c4f33a97-ce68-43a9-a79b-df50f34c1f96/extract-utilities/0.log" Feb 19 13:46:46 crc kubenswrapper[4833]: I0219 13:46:46.571131 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4vk2l_c4f33a97-ce68-43a9-a79b-df50f34c1f96/registry-server/0.log" Feb 19 13:46:46 crc kubenswrapper[4833]: I0219 13:46:46.627832 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kxdnw_845b40fa-5ca5-47fd-bf13-3b84c9951be6/extract-utilities/0.log" Feb 19 13:46:46 crc kubenswrapper[4833]: I0219 13:46:46.808229 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kxdnw_845b40fa-5ca5-47fd-bf13-3b84c9951be6/extract-utilities/0.log" Feb 19 13:46:46 crc kubenswrapper[4833]: I0219 13:46:46.815327 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-kxdnw_845b40fa-5ca5-47fd-bf13-3b84c9951be6/extract-content/0.log" Feb 19 13:46:46 crc kubenswrapper[4833]: I0219 13:46:46.821834 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kxdnw_845b40fa-5ca5-47fd-bf13-3b84c9951be6/extract-content/0.log" Feb 19 13:46:46 crc kubenswrapper[4833]: I0219 13:46:46.987363 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kxdnw_845b40fa-5ca5-47fd-bf13-3b84c9951be6/extract-content/0.log" Feb 19 13:46:47 crc kubenswrapper[4833]: I0219 13:46:47.003820 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kxdnw_845b40fa-5ca5-47fd-bf13-3b84c9951be6/extract-utilities/0.log" Feb 19 13:46:47 crc kubenswrapper[4833]: I0219 13:46:47.468959 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kxdnw_845b40fa-5ca5-47fd-bf13-3b84c9951be6/registry-server/0.log" Feb 19 13:47:15 crc kubenswrapper[4833]: E0219 13:47:15.525050 4833 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.222:42540->38.102.83.222:38639: write tcp 38.102.83.222:42540->38.102.83.222:38639: write: broken pipe Feb 19 13:48:30 crc kubenswrapper[4833]: I0219 13:48:30.392523 4833 generic.go:334] "Generic (PLEG): container finished" podID="fb08c1f7-3c4c-4589-94be-936811a6f919" containerID="2be4068635c96afe24c327f504de9e3307bdcadef7e1b68a5f2d6a28e792fd51" exitCode=0 Feb 19 13:48:30 crc kubenswrapper[4833]: I0219 13:48:30.392762 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wbwwr/must-gather-j95pl" event={"ID":"fb08c1f7-3c4c-4589-94be-936811a6f919","Type":"ContainerDied","Data":"2be4068635c96afe24c327f504de9e3307bdcadef7e1b68a5f2d6a28e792fd51"} Feb 19 13:48:30 crc kubenswrapper[4833]: I0219 13:48:30.393758 4833 scope.go:117] "RemoveContainer" containerID="2be4068635c96afe24c327f504de9e3307bdcadef7e1b68a5f2d6a28e792fd51" Feb 19 13:48:30 crc kubenswrapper[4833]: I0219 13:48:30.918332 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wbwwr_must-gather-j95pl_fb08c1f7-3c4c-4589-94be-936811a6f919/gather/0.log" Feb 19 13:48:38 crc kubenswrapper[4833]: I0219 13:48:38.376700 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-wbwwr/must-gather-j95pl"] Feb 19 13:48:38 crc kubenswrapper[4833]: I0219 13:48:38.377614 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-wbwwr/must-gather-j95pl" podUID="fb08c1f7-3c4c-4589-94be-936811a6f919" containerName="copy" containerID="cri-o://5e5c37fc3be61022381bb0fc29d1ebbe557b8ffe76b82f1d7327042285d07f5f" gracePeriod=2 Feb 19 13:48:38 crc kubenswrapper[4833]: I0219 13:48:38.387278 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-wbwwr/must-gather-j95pl"] Feb 19 13:48:38 crc kubenswrapper[4833]: I0219 13:48:38.849443 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wbwwr_must-gather-j95pl_fb08c1f7-3c4c-4589-94be-936811a6f919/copy/0.log" Feb 19 13:48:38 crc kubenswrapper[4833]: I0219 13:48:38.850600 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wbwwr/must-gather-j95pl" Feb 19 13:48:38 crc kubenswrapper[4833]: I0219 13:48:38.871189 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fb08c1f7-3c4c-4589-94be-936811a6f919-must-gather-output\") pod \"fb08c1f7-3c4c-4589-94be-936811a6f919\" (UID: \"fb08c1f7-3c4c-4589-94be-936811a6f919\") " Feb 19 13:48:38 crc kubenswrapper[4833]: I0219 13:48:38.871453 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g77bx\" (UniqueName: \"kubernetes.io/projected/fb08c1f7-3c4c-4589-94be-936811a6f919-kube-api-access-g77bx\") pod \"fb08c1f7-3c4c-4589-94be-936811a6f919\" (UID: \"fb08c1f7-3c4c-4589-94be-936811a6f919\") " Feb 19 13:48:38 crc kubenswrapper[4833]: I0219 13:48:38.883946 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb08c1f7-3c4c-4589-94be-936811a6f919-kube-api-access-g77bx" (OuterVolumeSpecName: "kube-api-access-g77bx") pod "fb08c1f7-3c4c-4589-94be-936811a6f919" (UID: "fb08c1f7-3c4c-4589-94be-936811a6f919"). InnerVolumeSpecName "kube-api-access-g77bx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:48:38 crc kubenswrapper[4833]: I0219 13:48:38.976195 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g77bx\" (UniqueName: \"kubernetes.io/projected/fb08c1f7-3c4c-4589-94be-936811a6f919-kube-api-access-g77bx\") on node \"crc\" DevicePath \"\"" Feb 19 13:48:39 crc kubenswrapper[4833]: I0219 13:48:39.058520 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb08c1f7-3c4c-4589-94be-936811a6f919-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "fb08c1f7-3c4c-4589-94be-936811a6f919" (UID: "fb08c1f7-3c4c-4589-94be-936811a6f919"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:48:39 crc kubenswrapper[4833]: I0219 13:48:39.077424 4833 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/fb08c1f7-3c4c-4589-94be-936811a6f919-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 19 13:48:39 crc kubenswrapper[4833]: I0219 13:48:39.492071 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-wbwwr_must-gather-j95pl_fb08c1f7-3c4c-4589-94be-936811a6f919/copy/0.log" Feb 19 13:48:39 crc kubenswrapper[4833]: I0219 13:48:39.492512 4833 generic.go:334] "Generic (PLEG): container finished" podID="fb08c1f7-3c4c-4589-94be-936811a6f919" containerID="5e5c37fc3be61022381bb0fc29d1ebbe557b8ffe76b82f1d7327042285d07f5f" exitCode=143 Feb 19 13:48:39 crc kubenswrapper[4833]: I0219 13:48:39.492563 4833 scope.go:117] "RemoveContainer" containerID="5e5c37fc3be61022381bb0fc29d1ebbe557b8ffe76b82f1d7327042285d07f5f" Feb 19 13:48:39 crc kubenswrapper[4833]: I0219 13:48:39.492578 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wbwwr/must-gather-j95pl" Feb 19 13:48:39 crc kubenswrapper[4833]: I0219 13:48:39.519531 4833 scope.go:117] "RemoveContainer" containerID="2be4068635c96afe24c327f504de9e3307bdcadef7e1b68a5f2d6a28e792fd51" Feb 19 13:48:39 crc kubenswrapper[4833]: I0219 13:48:39.585864 4833 scope.go:117] "RemoveContainer" containerID="5e5c37fc3be61022381bb0fc29d1ebbe557b8ffe76b82f1d7327042285d07f5f" Feb 19 13:48:39 crc kubenswrapper[4833]: E0219 13:48:39.586356 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e5c37fc3be61022381bb0fc29d1ebbe557b8ffe76b82f1d7327042285d07f5f\": container with ID starting with 5e5c37fc3be61022381bb0fc29d1ebbe557b8ffe76b82f1d7327042285d07f5f not found: ID does not exist" containerID="5e5c37fc3be61022381bb0fc29d1ebbe557b8ffe76b82f1d7327042285d07f5f" Feb 19 13:48:39 crc kubenswrapper[4833]: I0219 13:48:39.586387 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e5c37fc3be61022381bb0fc29d1ebbe557b8ffe76b82f1d7327042285d07f5f"} err="failed to get container status \"5e5c37fc3be61022381bb0fc29d1ebbe557b8ffe76b82f1d7327042285d07f5f\": rpc error: code = NotFound desc = could not find container \"5e5c37fc3be61022381bb0fc29d1ebbe557b8ffe76b82f1d7327042285d07f5f\": container with ID starting with 5e5c37fc3be61022381bb0fc29d1ebbe557b8ffe76b82f1d7327042285d07f5f not found: ID does not exist" Feb 19 13:48:39 crc kubenswrapper[4833]: I0219 13:48:39.586407 4833 scope.go:117] "RemoveContainer" containerID="2be4068635c96afe24c327f504de9e3307bdcadef7e1b68a5f2d6a28e792fd51" Feb 19 13:48:39 crc kubenswrapper[4833]: E0219 13:48:39.586836 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2be4068635c96afe24c327f504de9e3307bdcadef7e1b68a5f2d6a28e792fd51\": container with ID starting with 2be4068635c96afe24c327f504de9e3307bdcadef7e1b68a5f2d6a28e792fd51 not found: ID does not exist" containerID="2be4068635c96afe24c327f504de9e3307bdcadef7e1b68a5f2d6a28e792fd51" Feb 19 13:48:39 crc kubenswrapper[4833]: I0219 13:48:39.586882 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2be4068635c96afe24c327f504de9e3307bdcadef7e1b68a5f2d6a28e792fd51"} err="failed to get container status \"2be4068635c96afe24c327f504de9e3307bdcadef7e1b68a5f2d6a28e792fd51\": rpc error: code = NotFound desc = could not find container \"2be4068635c96afe24c327f504de9e3307bdcadef7e1b68a5f2d6a28e792fd51\": container with ID starting with 2be4068635c96afe24c327f504de9e3307bdcadef7e1b68a5f2d6a28e792fd51 not found: ID does not exist" Feb 19 13:48:40 crc kubenswrapper[4833]: I0219 13:48:40.326570 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb08c1f7-3c4c-4589-94be-936811a6f919" path="/var/lib/kubelet/pods/fb08c1f7-3c4c-4589-94be-936811a6f919/volumes" Feb 19 13:48:45 crc kubenswrapper[4833]: I0219 13:48:45.744856 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:48:45 crc kubenswrapper[4833]: I0219 13:48:45.745342 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" 
podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:49:15 crc kubenswrapper[4833]: I0219 13:49:15.744684 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:49:15 crc kubenswrapper[4833]: I0219 13:49:15.745290 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:49:45 crc kubenswrapper[4833]: I0219 13:49:45.744967 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:49:45 crc kubenswrapper[4833]: I0219 13:49:45.746629 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:49:45 crc kubenswrapper[4833]: I0219 13:49:45.746736 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" Feb 19 13:49:45 crc kubenswrapper[4833]: I0219 13:49:45.747544 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c8997d03737db42a2d58f2936e1f212ab53c9697cb8c50a8c4a60174788b9509"} pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 13:49:45 crc kubenswrapper[4833]: I0219 13:49:45.747609 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" containerID="cri-o://c8997d03737db42a2d58f2936e1f212ab53c9697cb8c50a8c4a60174788b9509" gracePeriod=600 Feb 19 13:49:45 crc kubenswrapper[4833]: E0219 13:49:45.888255 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:49:46 crc kubenswrapper[4833]: I0219 13:49:46.194259 4833 generic.go:334] "Generic (PLEG): container finished" podID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerID="c8997d03737db42a2d58f2936e1f212ab53c9697cb8c50a8c4a60174788b9509" exitCode=0 Feb 19 13:49:46 crc kubenswrapper[4833]: I0219 13:49:46.194331 4833 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" event={"ID":"a396d626-cea2-42cf-84c5-943b0b85a92b","Type":"ContainerDied","Data":"c8997d03737db42a2d58f2936e1f212ab53c9697cb8c50a8c4a60174788b9509"} Feb 19 13:49:46 crc kubenswrapper[4833]: I0219 13:49:46.194626 4833 scope.go:117] "RemoveContainer" containerID="dadbbd04deb7bfa91640633352760633123f4c60efb49cdd6b578eff95c9ffcc" Feb 19 13:49:46 crc kubenswrapper[4833]: I0219 13:49:46.195320 4833 scope.go:117] "RemoveContainer" containerID="c8997d03737db42a2d58f2936e1f212ab53c9697cb8c50a8c4a60174788b9509" Feb 19 13:49:46 crc kubenswrapper[4833]: E0219 13:49:46.195893 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:49:58 crc kubenswrapper[4833]: I0219 13:49:58.315021 4833 scope.go:117] "RemoveContainer" containerID="c8997d03737db42a2d58f2936e1f212ab53c9697cb8c50a8c4a60174788b9509" Feb 19 13:49:58 crc kubenswrapper[4833]: E0219 13:49:58.316135 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:50:08 crc kubenswrapper[4833]: I0219 13:50:08.511604 4833 scope.go:117] "RemoveContainer" containerID="7f66495b4cf455fc1702c0afd83fd6bc3df83b3ba55574bbb26e26308aa8590c" Feb 19 13:50:09 crc kubenswrapper[4833]: I0219 13:50:09.316063 4833 scope.go:117] "RemoveContainer" containerID="c8997d03737db42a2d58f2936e1f212ab53c9697cb8c50a8c4a60174788b9509" Feb 19 13:50:09 crc kubenswrapper[4833]: E0219 13:50:09.316637 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:50:21 crc kubenswrapper[4833]: I0219 13:50:21.315475 4833 scope.go:117] "RemoveContainer" containerID="c8997d03737db42a2d58f2936e1f212ab53c9697cb8c50a8c4a60174788b9509" Feb 19 13:50:21 crc kubenswrapper[4833]: E0219 13:50:21.316404 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:50:36 crc kubenswrapper[4833]: I0219 13:50:36.315665 4833 scope.go:117] "RemoveContainer" containerID="c8997d03737db42a2d58f2936e1f212ab53c9697cb8c50a8c4a60174788b9509" Feb 19 13:50:36 crc kubenswrapper[4833]: E0219 13:50:36.316548 
4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:50:51 crc kubenswrapper[4833]: I0219 13:50:51.319139 4833 scope.go:117] "RemoveContainer" containerID="c8997d03737db42a2d58f2936e1f212ab53c9697cb8c50a8c4a60174788b9509" Feb 19 13:50:51 crc kubenswrapper[4833]: E0219 13:50:51.320134 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:51:06 crc kubenswrapper[4833]: I0219 13:51:06.315635 4833 scope.go:117] "RemoveContainer" containerID="c8997d03737db42a2d58f2936e1f212ab53c9697cb8c50a8c4a60174788b9509" Feb 19 13:51:06 crc kubenswrapper[4833]: E0219 13:51:06.317389 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:51:18 crc kubenswrapper[4833]: I0219 13:51:18.315135 4833 scope.go:117] "RemoveContainer" containerID="c8997d03737db42a2d58f2936e1f212ab53c9697cb8c50a8c4a60174788b9509" Feb 19 13:51:18 crc kubenswrapper[4833]: E0219 13:51:18.316598 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:51:33 crc kubenswrapper[4833]: I0219 13:51:33.315284 4833 scope.go:117] "RemoveContainer" containerID="c8997d03737db42a2d58f2936e1f212ab53c9697cb8c50a8c4a60174788b9509" Feb 19 13:51:33 crc kubenswrapper[4833]: E0219 13:51:33.318199 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 13:51:37 crc kubenswrapper[4833]: I0219 13:51:37.422432 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8wn8p/must-gather-q2nz7"] Feb 19 13:51:37 crc kubenswrapper[4833]: E0219 13:51:37.423429 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3045198-fd27-4a56-bced-66cdddc4593a" containerName="extract-utilities" Feb 19 13:51:37 crc kubenswrapper[4833]: I0219 
13:51:37.423442 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3045198-fd27-4a56-bced-66cdddc4593a" containerName="extract-utilities" Feb 19 13:51:37 crc kubenswrapper[4833]: E0219 13:51:37.423455 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb08c1f7-3c4c-4589-94be-936811a6f919" containerName="gather" Feb 19 13:51:37 crc kubenswrapper[4833]: I0219 13:51:37.423461 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb08c1f7-3c4c-4589-94be-936811a6f919" containerName="gather" Feb 19 13:51:37 crc kubenswrapper[4833]: E0219 13:51:37.423474 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3045198-fd27-4a56-bced-66cdddc4593a" containerName="extract-content" Feb 19 13:51:37 crc kubenswrapper[4833]: I0219 13:51:37.423481 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3045198-fd27-4a56-bced-66cdddc4593a" containerName="extract-content" Feb 19 13:51:37 crc kubenswrapper[4833]: E0219 13:51:37.423516 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbca41d8-40d9-4706-9a16-7d2e3f54c0ff" containerName="extract-utilities" Feb 19 13:51:37 crc kubenswrapper[4833]: I0219 13:51:37.423522 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbca41d8-40d9-4706-9a16-7d2e3f54c0ff" containerName="extract-utilities" Feb 19 13:51:37 crc kubenswrapper[4833]: E0219 13:51:37.423536 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbca41d8-40d9-4706-9a16-7d2e3f54c0ff" containerName="extract-content" Feb 19 13:51:37 crc kubenswrapper[4833]: I0219 13:51:37.423541 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbca41d8-40d9-4706-9a16-7d2e3f54c0ff" containerName="extract-content" Feb 19 13:51:37 crc kubenswrapper[4833]: E0219 13:51:37.423554 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3045198-fd27-4a56-bced-66cdddc4593a" containerName="registry-server" Feb 19 13:51:37 crc kubenswrapper[4833]: I0219 13:51:37.423560 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3045198-fd27-4a56-bced-66cdddc4593a" containerName="registry-server" Feb 19 13:51:37 crc kubenswrapper[4833]: E0219 13:51:37.423578 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb08c1f7-3c4c-4589-94be-936811a6f919" containerName="copy" Feb 19 13:51:37 crc kubenswrapper[4833]: I0219 13:51:37.423584 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb08c1f7-3c4c-4589-94be-936811a6f919" containerName="copy" Feb 19 13:51:37 crc kubenswrapper[4833]: E0219 13:51:37.423598 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbca41d8-40d9-4706-9a16-7d2e3f54c0ff" containerName="registry-server" Feb 19 13:51:37 crc kubenswrapper[4833]: I0219 13:51:37.423604 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbca41d8-40d9-4706-9a16-7d2e3f54c0ff" containerName="registry-server" Feb 19 13:51:37 crc kubenswrapper[4833]: I0219 13:51:37.423799 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb08c1f7-3c4c-4589-94be-936811a6f919" containerName="copy" Feb 19 13:51:37 crc kubenswrapper[4833]: I0219 13:51:37.423820 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb08c1f7-3c4c-4589-94be-936811a6f919" containerName="gather" Feb 19 13:51:37 crc kubenswrapper[4833]: I0219 13:51:37.423832 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3045198-fd27-4a56-bced-66cdddc4593a" containerName="registry-server" Feb 19 13:51:37 crc kubenswrapper[4833]: I0219 13:51:37.423845 4833 
memory_manager.go:354] "RemoveStaleState removing state" podUID="fbca41d8-40d9-4706-9a16-7d2e3f54c0ff" containerName="registry-server" Feb 19 13:51:37 crc kubenswrapper[4833]: I0219 13:51:37.424883 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8wn8p/must-gather-q2nz7" Feb 19 13:51:37 crc kubenswrapper[4833]: I0219 13:51:37.426819 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-8wn8p"/"default-dockercfg-544fs" Feb 19 13:51:37 crc kubenswrapper[4833]: I0219 13:51:37.431035 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-8wn8p"/"openshift-service-ca.crt" Feb 19 13:51:37 crc kubenswrapper[4833]: I0219 13:51:37.433324 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-8wn8p"/"kube-root-ca.crt" Feb 19 13:51:37 crc kubenswrapper[4833]: I0219 13:51:37.436552 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8wn8p/must-gather-q2nz7"] Feb 19 13:51:37 crc kubenswrapper[4833]: I0219 13:51:37.512783 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr5sl\" (UniqueName: \"kubernetes.io/projected/f637b54a-2e35-4f05-a5cf-204c9cc9154a-kube-api-access-fr5sl\") pod \"must-gather-q2nz7\" (UID: \"f637b54a-2e35-4f05-a5cf-204c9cc9154a\") " pod="openshift-must-gather-8wn8p/must-gather-q2nz7" Feb 19 13:51:37 crc kubenswrapper[4833]: I0219 13:51:37.512933 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f637b54a-2e35-4f05-a5cf-204c9cc9154a-must-gather-output\") pod \"must-gather-q2nz7\" (UID: \"f637b54a-2e35-4f05-a5cf-204c9cc9154a\") " pod="openshift-must-gather-8wn8p/must-gather-q2nz7" Feb 19 13:51:37 crc kubenswrapper[4833]: I0219 13:51:37.614897 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f637b54a-2e35-4f05-a5cf-204c9cc9154a-must-gather-output\") pod \"must-gather-q2nz7\" (UID: \"f637b54a-2e35-4f05-a5cf-204c9cc9154a\") " pod="openshift-must-gather-8wn8p/must-gather-q2nz7" Feb 19 13:51:37 crc kubenswrapper[4833]: I0219 13:51:37.615109 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr5sl\" (UniqueName: \"kubernetes.io/projected/f637b54a-2e35-4f05-a5cf-204c9cc9154a-kube-api-access-fr5sl\") pod \"must-gather-q2nz7\" (UID: \"f637b54a-2e35-4f05-a5cf-204c9cc9154a\") " pod="openshift-must-gather-8wn8p/must-gather-q2nz7" Feb 19 13:51:37 crc kubenswrapper[4833]: I0219 13:51:37.615543 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f637b54a-2e35-4f05-a5cf-204c9cc9154a-must-gather-output\") pod \"must-gather-q2nz7\" (UID: \"f637b54a-2e35-4f05-a5cf-204c9cc9154a\") " pod="openshift-must-gather-8wn8p/must-gather-q2nz7" Feb 19 13:51:37 crc kubenswrapper[4833]: I0219 13:51:37.636925 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr5sl\" (UniqueName: \"kubernetes.io/projected/f637b54a-2e35-4f05-a5cf-204c9cc9154a-kube-api-access-fr5sl\") pod \"must-gather-q2nz7\" (UID: \"f637b54a-2e35-4f05-a5cf-204c9cc9154a\") " pod="openshift-must-gather-8wn8p/must-gather-q2nz7" Feb 19 13:51:37 crc kubenswrapper[4833]: I0219 13:51:37.740422 4833 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-must-gather-8wn8p/must-gather-q2nz7" Feb 19 13:51:38 crc kubenswrapper[4833]: I0219 13:51:38.219044 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8wn8p/must-gather-q2nz7"] Feb 19 13:51:38 crc kubenswrapper[4833]: I0219 13:51:38.533885 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8wn8p/must-gather-q2nz7" event={"ID":"f637b54a-2e35-4f05-a5cf-204c9cc9154a","Type":"ContainerStarted","Data":"01cdd0c7872b184d1ab43cdc545d662e1458eb0fda35e29cd054be31df46eb7f"} Feb 19 13:51:39 crc kubenswrapper[4833]: I0219 13:51:39.545592 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8wn8p/must-gather-q2nz7" event={"ID":"f637b54a-2e35-4f05-a5cf-204c9cc9154a","Type":"ContainerStarted","Data":"9655f200b35fcc47002a3ba8346715d0cae96a739f0b96938dafcf2f7ae7b459"} Feb 19 13:51:39 crc kubenswrapper[4833]: I0219 13:51:39.545874 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8wn8p/must-gather-q2nz7" event={"ID":"f637b54a-2e35-4f05-a5cf-204c9cc9154a","Type":"ContainerStarted","Data":"c840db8b2ca2a95f08c224d21fc13f3ea0609d40cd201f9e4ae93ed06245961f"} Feb 19 13:51:39 crc kubenswrapper[4833]: I0219 13:51:39.570443 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8wn8p/must-gather-q2nz7" podStartSLOduration=2.570424393 podStartE2EDuration="2.570424393s" podCreationTimestamp="2026-02-19 13:51:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:51:39.559478714 +0000 UTC m=+3909.954997482" watchObservedRunningTime="2026-02-19 13:51:39.570424393 +0000 UTC m=+3909.965943151" Feb 19 13:51:42 crc kubenswrapper[4833]: I0219 13:51:42.645278 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8wn8p/crc-debug-htsq6"] Feb 19 13:51:42 crc kubenswrapper[4833]: I0219 13:51:42.646956 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8wn8p/crc-debug-htsq6"
Feb 19 13:51:42 crc kubenswrapper[4833]: I0219 13:51:42.716912 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c5b7df4a-eefd-4067-aae6-3bac30a40466-host\") pod \"crc-debug-htsq6\" (UID: \"c5b7df4a-eefd-4067-aae6-3bac30a40466\") " pod="openshift-must-gather-8wn8p/crc-debug-htsq6"
Feb 19 13:51:42 crc kubenswrapper[4833]: I0219 13:51:42.717194 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rdp2\" (UniqueName: \"kubernetes.io/projected/c5b7df4a-eefd-4067-aae6-3bac30a40466-kube-api-access-4rdp2\") pod \"crc-debug-htsq6\" (UID: \"c5b7df4a-eefd-4067-aae6-3bac30a40466\") " pod="openshift-must-gather-8wn8p/crc-debug-htsq6"
Feb 19 13:51:42 crc kubenswrapper[4833]: I0219 13:51:42.819390 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c5b7df4a-eefd-4067-aae6-3bac30a40466-host\") pod \"crc-debug-htsq6\" (UID: \"c5b7df4a-eefd-4067-aae6-3bac30a40466\") " pod="openshift-must-gather-8wn8p/crc-debug-htsq6"
Feb 19 13:51:42 crc kubenswrapper[4833]: I0219 13:51:42.819441 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rdp2\" (UniqueName: \"kubernetes.io/projected/c5b7df4a-eefd-4067-aae6-3bac30a40466-kube-api-access-4rdp2\") pod \"crc-debug-htsq6\" (UID: \"c5b7df4a-eefd-4067-aae6-3bac30a40466\") " pod="openshift-must-gather-8wn8p/crc-debug-htsq6"
Feb 19 13:51:42 crc kubenswrapper[4833]: I0219 13:51:42.819917 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c5b7df4a-eefd-4067-aae6-3bac30a40466-host\") pod \"crc-debug-htsq6\" (UID: \"c5b7df4a-eefd-4067-aae6-3bac30a40466\") " pod="openshift-must-gather-8wn8p/crc-debug-htsq6"
Feb 19 13:51:42 crc kubenswrapper[4833]: I0219 13:51:42.852159 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rdp2\" (UniqueName: \"kubernetes.io/projected/c5b7df4a-eefd-4067-aae6-3bac30a40466-kube-api-access-4rdp2\") pod \"crc-debug-htsq6\" (UID: \"c5b7df4a-eefd-4067-aae6-3bac30a40466\") " pod="openshift-must-gather-8wn8p/crc-debug-htsq6"
Feb 19 13:51:42 crc kubenswrapper[4833]: I0219 13:51:42.969660 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8wn8p/crc-debug-htsq6"
Feb 19 13:51:43 crc kubenswrapper[4833]: W0219 13:51:43.010575 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5b7df4a_eefd_4067_aae6_3bac30a40466.slice/crio-4065d835a7a63c23b34a8a432d30981ffb7cf747549d31b1140608465c8125ac WatchSource:0}: Error finding container 4065d835a7a63c23b34a8a432d30981ffb7cf747549d31b1140608465c8125ac: Status 404 returned error can't find the container with id 4065d835a7a63c23b34a8a432d30981ffb7cf747549d31b1140608465c8125ac
Feb 19 13:51:43 crc kubenswrapper[4833]: I0219 13:51:43.578734 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8wn8p/crc-debug-htsq6" event={"ID":"c5b7df4a-eefd-4067-aae6-3bac30a40466","Type":"ContainerStarted","Data":"b45ffd0eaf0c9d58e2723f3347663af24c81be8279b5f854dcbd610ae0e6e477"}
Feb 19 13:51:43 crc kubenswrapper[4833]: I0219 13:51:43.579053 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8wn8p/crc-debug-htsq6" event={"ID":"c5b7df4a-eefd-4067-aae6-3bac30a40466","Type":"ContainerStarted","Data":"4065d835a7a63c23b34a8a432d30981ffb7cf747549d31b1140608465c8125ac"}
Feb 19 13:51:43 crc kubenswrapper[4833]: I0219 13:51:43.592852 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8wn8p/crc-debug-htsq6" podStartSLOduration=1.592836701 podStartE2EDuration="1.592836701s" podCreationTimestamp="2026-02-19 13:51:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 13:51:43.590320254 +0000 UTC m=+3913.985839012" watchObservedRunningTime="2026-02-19 13:51:43.592836701 +0000 UTC m=+3913.988355469"
Feb 19 13:51:48 crc kubenswrapper[4833]: I0219 13:51:48.333875 4833 scope.go:117] "RemoveContainer" containerID="c8997d03737db42a2d58f2936e1f212ab53c9697cb8c50a8c4a60174788b9509"
Feb 19 13:51:48 crc kubenswrapper[4833]: E0219 13:51:48.334943 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:52:01 crc kubenswrapper[4833]: I0219 13:52:01.314927 4833 scope.go:117] "RemoveContainer" containerID="c8997d03737db42a2d58f2936e1f212ab53c9697cb8c50a8c4a60174788b9509"
Feb 19 13:52:01 crc kubenswrapper[4833]: E0219 13:52:01.315829 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:52:15 crc kubenswrapper[4833]: I0219 13:52:15.315449 4833 scope.go:117] "RemoveContainer" containerID="c8997d03737db42a2d58f2936e1f212ab53c9697cb8c50a8c4a60174788b9509"
Feb 19 13:52:15 crc kubenswrapper[4833]: E0219 13:52:15.316966 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:52:16 crc kubenswrapper[4833]: I0219 13:52:16.852950 4833 generic.go:334] "Generic (PLEG): container finished" podID="c5b7df4a-eefd-4067-aae6-3bac30a40466" containerID="b45ffd0eaf0c9d58e2723f3347663af24c81be8279b5f854dcbd610ae0e6e477" exitCode=0
Feb 19 13:52:16 crc kubenswrapper[4833]: I0219 13:52:16.853052 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8wn8p/crc-debug-htsq6" event={"ID":"c5b7df4a-eefd-4067-aae6-3bac30a40466","Type":"ContainerDied","Data":"b45ffd0eaf0c9d58e2723f3347663af24c81be8279b5f854dcbd610ae0e6e477"}
Feb 19 13:52:17 crc kubenswrapper[4833]: I0219 13:52:17.951881 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8wn8p/crc-debug-htsq6"
Feb 19 13:52:17 crc kubenswrapper[4833]: I0219 13:52:17.998690 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c5b7df4a-eefd-4067-aae6-3bac30a40466-host\") pod \"c5b7df4a-eefd-4067-aae6-3bac30a40466\" (UID: \"c5b7df4a-eefd-4067-aae6-3bac30a40466\") "
Feb 19 13:52:17 crc kubenswrapper[4833]: I0219 13:52:17.998838 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5b7df4a-eefd-4067-aae6-3bac30a40466-host" (OuterVolumeSpecName: "host") pod "c5b7df4a-eefd-4067-aae6-3bac30a40466" (UID: "c5b7df4a-eefd-4067-aae6-3bac30a40466"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 13:52:17 crc kubenswrapper[4833]: I0219 13:52:17.998990 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rdp2\" (UniqueName: \"kubernetes.io/projected/c5b7df4a-eefd-4067-aae6-3bac30a40466-kube-api-access-4rdp2\") pod \"c5b7df4a-eefd-4067-aae6-3bac30a40466\" (UID: \"c5b7df4a-eefd-4067-aae6-3bac30a40466\") "
Feb 19 13:52:17 crc kubenswrapper[4833]: I0219 13:52:17.999731 4833 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c5b7df4a-eefd-4067-aae6-3bac30a40466-host\") on node \"crc\" DevicePath \"\""
Feb 19 13:52:18 crc kubenswrapper[4833]: I0219 13:52:18.009038 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8wn8p/crc-debug-htsq6"]
Feb 19 13:52:18 crc kubenswrapper[4833]: I0219 13:52:18.010676 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5b7df4a-eefd-4067-aae6-3bac30a40466-kube-api-access-4rdp2" (OuterVolumeSpecName: "kube-api-access-4rdp2") pod "c5b7df4a-eefd-4067-aae6-3bac30a40466" (UID: "c5b7df4a-eefd-4067-aae6-3bac30a40466"). InnerVolumeSpecName "kube-api-access-4rdp2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 13:52:18 crc kubenswrapper[4833]: I0219 13:52:18.018675 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8wn8p/crc-debug-htsq6"]
Feb 19 13:52:18 crc kubenswrapper[4833]: I0219 13:52:18.101663 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rdp2\" (UniqueName: \"kubernetes.io/projected/c5b7df4a-eefd-4067-aae6-3bac30a40466-kube-api-access-4rdp2\") on node \"crc\" DevicePath \"\""
Feb 19 13:52:18 crc kubenswrapper[4833]: I0219 13:52:18.337419 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5b7df4a-eefd-4067-aae6-3bac30a40466" path="/var/lib/kubelet/pods/c5b7df4a-eefd-4067-aae6-3bac30a40466/volumes"
Feb 19 13:52:18 crc kubenswrapper[4833]: I0219 13:52:18.869212 4833 scope.go:117] "RemoveContainer" containerID="b45ffd0eaf0c9d58e2723f3347663af24c81be8279b5f854dcbd610ae0e6e477"
Feb 19 13:52:18 crc kubenswrapper[4833]: I0219 13:52:18.869304 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8wn8p/crc-debug-htsq6"
Feb 19 13:52:19 crc kubenswrapper[4833]: I0219 13:52:19.209910 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8wn8p/crc-debug-wzzhc"]
Feb 19 13:52:19 crc kubenswrapper[4833]: E0219 13:52:19.210670 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5b7df4a-eefd-4067-aae6-3bac30a40466" containerName="container-00"
Feb 19 13:52:19 crc kubenswrapper[4833]: I0219 13:52:19.210685 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5b7df4a-eefd-4067-aae6-3bac30a40466" containerName="container-00"
Feb 19 13:52:19 crc kubenswrapper[4833]: I0219 13:52:19.211013 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5b7df4a-eefd-4067-aae6-3bac30a40466" containerName="container-00"
Feb 19 13:52:19 crc kubenswrapper[4833]: I0219 13:52:19.211717 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8wn8p/crc-debug-wzzhc"
Feb 19 13:52:19 crc kubenswrapper[4833]: I0219 13:52:19.324676 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3bbfac76-ff71-4567-aeac-bfe42d377b4d-host\") pod \"crc-debug-wzzhc\" (UID: \"3bbfac76-ff71-4567-aeac-bfe42d377b4d\") " pod="openshift-must-gather-8wn8p/crc-debug-wzzhc"
Feb 19 13:52:19 crc kubenswrapper[4833]: I0219 13:52:19.324784 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmqpq\" (UniqueName: \"kubernetes.io/projected/3bbfac76-ff71-4567-aeac-bfe42d377b4d-kube-api-access-xmqpq\") pod \"crc-debug-wzzhc\" (UID: \"3bbfac76-ff71-4567-aeac-bfe42d377b4d\") " pod="openshift-must-gather-8wn8p/crc-debug-wzzhc"
Feb 19 13:52:19 crc kubenswrapper[4833]: I0219 13:52:19.426143 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmqpq\" (UniqueName: \"kubernetes.io/projected/3bbfac76-ff71-4567-aeac-bfe42d377b4d-kube-api-access-xmqpq\") pod \"crc-debug-wzzhc\" (UID: \"3bbfac76-ff71-4567-aeac-bfe42d377b4d\") " pod="openshift-must-gather-8wn8p/crc-debug-wzzhc"
Feb 19 13:52:19 crc kubenswrapper[4833]: I0219 13:52:19.426314 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3bbfac76-ff71-4567-aeac-bfe42d377b4d-host\") pod \"crc-debug-wzzhc\" (UID: \"3bbfac76-ff71-4567-aeac-bfe42d377b4d\") " pod="openshift-must-gather-8wn8p/crc-debug-wzzhc"
Feb 19 13:52:19 crc kubenswrapper[4833]: I0219 13:52:19.426438 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3bbfac76-ff71-4567-aeac-bfe42d377b4d-host\") pod \"crc-debug-wzzhc\" (UID: \"3bbfac76-ff71-4567-aeac-bfe42d377b4d\") " pod="openshift-must-gather-8wn8p/crc-debug-wzzhc"
Feb 19 13:52:19 crc kubenswrapper[4833]: I0219 13:52:19.462284 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmqpq\" (UniqueName: \"kubernetes.io/projected/3bbfac76-ff71-4567-aeac-bfe42d377b4d-kube-api-access-xmqpq\") pod \"crc-debug-wzzhc\" (UID: \"3bbfac76-ff71-4567-aeac-bfe42d377b4d\") " pod="openshift-must-gather-8wn8p/crc-debug-wzzhc"
Feb 19 13:52:19 crc kubenswrapper[4833]: I0219 13:52:19.526099 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8wn8p/crc-debug-wzzhc"
Feb 19 13:52:19 crc kubenswrapper[4833]: I0219 13:52:19.877226 4833 generic.go:334] "Generic (PLEG): container finished" podID="3bbfac76-ff71-4567-aeac-bfe42d377b4d" containerID="87bfdd3973398f8bd7fea8930c76be985a984e52f0691747bb292184b967b9a6" exitCode=0
Feb 19 13:52:19 crc kubenswrapper[4833]: I0219 13:52:19.877311 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8wn8p/crc-debug-wzzhc" event={"ID":"3bbfac76-ff71-4567-aeac-bfe42d377b4d","Type":"ContainerDied","Data":"87bfdd3973398f8bd7fea8930c76be985a984e52f0691747bb292184b967b9a6"}
Feb 19 13:52:19 crc kubenswrapper[4833]: I0219 13:52:19.877711 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8wn8p/crc-debug-wzzhc" event={"ID":"3bbfac76-ff71-4567-aeac-bfe42d377b4d","Type":"ContainerStarted","Data":"313509e6714daeb796334496d1e8e57b069fd7f3091dc70ff0378f08b4d7897f"}
Feb 19 13:52:20 crc kubenswrapper[4833]: I0219 13:52:20.248823 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8wn8p/crc-debug-wzzhc"]
Feb 19 13:52:20 crc kubenswrapper[4833]: I0219 13:52:20.256292 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8wn8p/crc-debug-wzzhc"]
Feb 19 13:52:20 crc kubenswrapper[4833]: I0219 13:52:20.994252 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8wn8p/crc-debug-wzzhc"
Feb 19 13:52:21 crc kubenswrapper[4833]: I0219 13:52:21.052121 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3bbfac76-ff71-4567-aeac-bfe42d377b4d-host\") pod \"3bbfac76-ff71-4567-aeac-bfe42d377b4d\" (UID: \"3bbfac76-ff71-4567-aeac-bfe42d377b4d\") "
Feb 19 13:52:21 crc kubenswrapper[4833]: I0219 13:52:21.052279 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bbfac76-ff71-4567-aeac-bfe42d377b4d-host" (OuterVolumeSpecName: "host") pod "3bbfac76-ff71-4567-aeac-bfe42d377b4d" (UID: "3bbfac76-ff71-4567-aeac-bfe42d377b4d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 13:52:21 crc kubenswrapper[4833]: I0219 13:52:21.052798 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmqpq\" (UniqueName: \"kubernetes.io/projected/3bbfac76-ff71-4567-aeac-bfe42d377b4d-kube-api-access-xmqpq\") pod \"3bbfac76-ff71-4567-aeac-bfe42d377b4d\" (UID: \"3bbfac76-ff71-4567-aeac-bfe42d377b4d\") "
Feb 19 13:52:21 crc kubenswrapper[4833]: I0219 13:52:21.053286 4833 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3bbfac76-ff71-4567-aeac-bfe42d377b4d-host\") on node \"crc\" DevicePath \"\""
Feb 19 13:52:21 crc kubenswrapper[4833]: I0219 13:52:21.061890 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bbfac76-ff71-4567-aeac-bfe42d377b4d-kube-api-access-xmqpq" (OuterVolumeSpecName: "kube-api-access-xmqpq") pod "3bbfac76-ff71-4567-aeac-bfe42d377b4d" (UID: "3bbfac76-ff71-4567-aeac-bfe42d377b4d"). InnerVolumeSpecName "kube-api-access-xmqpq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 13:52:21 crc kubenswrapper[4833]: I0219 13:52:21.156529 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmqpq\" (UniqueName: \"kubernetes.io/projected/3bbfac76-ff71-4567-aeac-bfe42d377b4d-kube-api-access-xmqpq\") on node \"crc\" DevicePath \"\""
Feb 19 13:52:21 crc kubenswrapper[4833]: I0219 13:52:21.454206 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8wn8p/crc-debug-m8tqn"]
Feb 19 13:52:21 crc kubenswrapper[4833]: E0219 13:52:21.454629 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bbfac76-ff71-4567-aeac-bfe42d377b4d" containerName="container-00"
Feb 19 13:52:21 crc kubenswrapper[4833]: I0219 13:52:21.454645 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bbfac76-ff71-4567-aeac-bfe42d377b4d" containerName="container-00"
Feb 19 13:52:21 crc kubenswrapper[4833]: I0219 13:52:21.454862 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bbfac76-ff71-4567-aeac-bfe42d377b4d" containerName="container-00"
Feb 19 13:52:21 crc kubenswrapper[4833]: I0219 13:52:21.455426 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8wn8p/crc-debug-m8tqn"
Feb 19 13:52:21 crc kubenswrapper[4833]: I0219 13:52:21.561940 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjrrb\" (UniqueName: \"kubernetes.io/projected/5a447734-b60e-4ae3-bc4f-5534ab5d8afb-kube-api-access-sjrrb\") pod \"crc-debug-m8tqn\" (UID: \"5a447734-b60e-4ae3-bc4f-5534ab5d8afb\") " pod="openshift-must-gather-8wn8p/crc-debug-m8tqn"
Feb 19 13:52:21 crc kubenswrapper[4833]: I0219 13:52:21.562236 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5a447734-b60e-4ae3-bc4f-5534ab5d8afb-host\") pod \"crc-debug-m8tqn\" (UID: \"5a447734-b60e-4ae3-bc4f-5534ab5d8afb\") " pod="openshift-must-gather-8wn8p/crc-debug-m8tqn"
Feb 19 13:52:21 crc kubenswrapper[4833]: I0219 13:52:21.663631 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjrrb\" (UniqueName: \"kubernetes.io/projected/5a447734-b60e-4ae3-bc4f-5534ab5d8afb-kube-api-access-sjrrb\") pod \"crc-debug-m8tqn\" (UID: \"5a447734-b60e-4ae3-bc4f-5534ab5d8afb\") " pod="openshift-must-gather-8wn8p/crc-debug-m8tqn"
Feb 19 13:52:21 crc kubenswrapper[4833]: I0219 13:52:21.663881 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5a447734-b60e-4ae3-bc4f-5534ab5d8afb-host\") pod \"crc-debug-m8tqn\" (UID: \"5a447734-b60e-4ae3-bc4f-5534ab5d8afb\") " pod="openshift-must-gather-8wn8p/crc-debug-m8tqn"
Feb 19 13:52:21 crc kubenswrapper[4833]: I0219 13:52:21.664211 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5a447734-b60e-4ae3-bc4f-5534ab5d8afb-host\") pod \"crc-debug-m8tqn\" (UID: \"5a447734-b60e-4ae3-bc4f-5534ab5d8afb\") " pod="openshift-must-gather-8wn8p/crc-debug-m8tqn"
Feb 19 13:52:21 crc kubenswrapper[4833]: I0219 13:52:21.688753 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjrrb\" (UniqueName: \"kubernetes.io/projected/5a447734-b60e-4ae3-bc4f-5534ab5d8afb-kube-api-access-sjrrb\") pod \"crc-debug-m8tqn\" (UID: \"5a447734-b60e-4ae3-bc4f-5534ab5d8afb\") " pod="openshift-must-gather-8wn8p/crc-debug-m8tqn"
Feb 19 13:52:21 crc kubenswrapper[4833]: I0219 13:52:21.769426 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8wn8p/crc-debug-m8tqn"
Feb 19 13:52:21 crc kubenswrapper[4833]: W0219 13:52:21.810181 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a447734_b60e_4ae3_bc4f_5534ab5d8afb.slice/crio-0a0c4acac5deb3160a20a5127b485c48de936fdc14e630fa27f4997727998d75 WatchSource:0}: Error finding container 0a0c4acac5deb3160a20a5127b485c48de936fdc14e630fa27f4997727998d75: Status 404 returned error can't find the container with id 0a0c4acac5deb3160a20a5127b485c48de936fdc14e630fa27f4997727998d75
Feb 19 13:52:21 crc kubenswrapper[4833]: I0219 13:52:21.895701 4833 scope.go:117] "RemoveContainer" containerID="87bfdd3973398f8bd7fea8930c76be985a984e52f0691747bb292184b967b9a6"
Feb 19 13:52:21 crc kubenswrapper[4833]: I0219 13:52:21.895833 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8wn8p/crc-debug-wzzhc"
Feb 19 13:52:21 crc kubenswrapper[4833]: I0219 13:52:21.901638 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8wn8p/crc-debug-m8tqn" event={"ID":"5a447734-b60e-4ae3-bc4f-5534ab5d8afb","Type":"ContainerStarted","Data":"0a0c4acac5deb3160a20a5127b485c48de936fdc14e630fa27f4997727998d75"}
Feb 19 13:52:22 crc kubenswrapper[4833]: I0219 13:52:22.327949 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bbfac76-ff71-4567-aeac-bfe42d377b4d" path="/var/lib/kubelet/pods/3bbfac76-ff71-4567-aeac-bfe42d377b4d/volumes"
Feb 19 13:52:22 crc kubenswrapper[4833]: I0219 13:52:22.913362 4833 generic.go:334] "Generic (PLEG): container finished" podID="5a447734-b60e-4ae3-bc4f-5534ab5d8afb" containerID="0bf47d5f27a296cef2be1d11b3371a840a8ea1c93958272f521ed8f80b797a02" exitCode=0
Feb 19 13:52:22 crc kubenswrapper[4833]: I0219 13:52:22.913442 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8wn8p/crc-debug-m8tqn" event={"ID":"5a447734-b60e-4ae3-bc4f-5534ab5d8afb","Type":"ContainerDied","Data":"0bf47d5f27a296cef2be1d11b3371a840a8ea1c93958272f521ed8f80b797a02"}
Feb 19 13:52:22 crc kubenswrapper[4833]: I0219 13:52:22.975213 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8wn8p/crc-debug-m8tqn"]
Feb 19 13:52:22 crc kubenswrapper[4833]: I0219 13:52:22.985759 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8wn8p/crc-debug-m8tqn"]
Feb 19 13:52:24 crc kubenswrapper[4833]: I0219 13:52:24.042095 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8wn8p/crc-debug-m8tqn"
Feb 19 13:52:24 crc kubenswrapper[4833]: I0219 13:52:24.105958 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjrrb\" (UniqueName: \"kubernetes.io/projected/5a447734-b60e-4ae3-bc4f-5534ab5d8afb-kube-api-access-sjrrb\") pod \"5a447734-b60e-4ae3-bc4f-5534ab5d8afb\" (UID: \"5a447734-b60e-4ae3-bc4f-5534ab5d8afb\") "
Feb 19 13:52:24 crc kubenswrapper[4833]: I0219 13:52:24.106041 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5a447734-b60e-4ae3-bc4f-5534ab5d8afb-host\") pod \"5a447734-b60e-4ae3-bc4f-5534ab5d8afb\" (UID: \"5a447734-b60e-4ae3-bc4f-5534ab5d8afb\") "
Feb 19 13:52:24 crc kubenswrapper[4833]: I0219 13:52:24.106201 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a447734-b60e-4ae3-bc4f-5534ab5d8afb-host" (OuterVolumeSpecName: "host") pod "5a447734-b60e-4ae3-bc4f-5534ab5d8afb" (UID: "5a447734-b60e-4ae3-bc4f-5534ab5d8afb"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 19 13:52:24 crc kubenswrapper[4833]: I0219 13:52:24.106549 4833 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5a447734-b60e-4ae3-bc4f-5534ab5d8afb-host\") on node \"crc\" DevicePath \"\""
Feb 19 13:52:24 crc kubenswrapper[4833]: I0219 13:52:24.111646 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a447734-b60e-4ae3-bc4f-5534ab5d8afb-kube-api-access-sjrrb" (OuterVolumeSpecName: "kube-api-access-sjrrb") pod "5a447734-b60e-4ae3-bc4f-5534ab5d8afb" (UID: "5a447734-b60e-4ae3-bc4f-5534ab5d8afb"). InnerVolumeSpecName "kube-api-access-sjrrb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 13:52:24 crc kubenswrapper[4833]: I0219 13:52:24.208114 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjrrb\" (UniqueName: \"kubernetes.io/projected/5a447734-b60e-4ae3-bc4f-5534ab5d8afb-kube-api-access-sjrrb\") on node \"crc\" DevicePath \"\""
Feb 19 13:52:24 crc kubenswrapper[4833]: I0219 13:52:24.324960 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a447734-b60e-4ae3-bc4f-5534ab5d8afb" path="/var/lib/kubelet/pods/5a447734-b60e-4ae3-bc4f-5534ab5d8afb/volumes"
Feb 19 13:52:24 crc kubenswrapper[4833]: I0219 13:52:24.931143 4833 scope.go:117] "RemoveContainer" containerID="0bf47d5f27a296cef2be1d11b3371a840a8ea1c93958272f521ed8f80b797a02"
Feb 19 13:52:24 crc kubenswrapper[4833]: I0219 13:52:24.931257 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8wn8p/crc-debug-m8tqn"
Feb 19 13:52:26 crc kubenswrapper[4833]: I0219 13:52:26.315414 4833 scope.go:117] "RemoveContainer" containerID="c8997d03737db42a2d58f2936e1f212ab53c9697cb8c50a8c4a60174788b9509"
Feb 19 13:52:26 crc kubenswrapper[4833]: E0219 13:52:26.316018 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:52:37 crc kubenswrapper[4833]: I0219 13:52:37.315753 4833 scope.go:117] "RemoveContainer" containerID="c8997d03737db42a2d58f2936e1f212ab53c9697cb8c50a8c4a60174788b9509"
Feb 19 13:52:37 crc kubenswrapper[4833]: E0219 13:52:37.316587 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:52:49 crc kubenswrapper[4833]: I0219 13:52:49.314794 4833 scope.go:117] "RemoveContainer" containerID="c8997d03737db42a2d58f2936e1f212ab53c9697cb8c50a8c4a60174788b9509"
Feb 19 13:52:49 crc kubenswrapper[4833]: E0219 13:52:49.338983 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:52:54 crc kubenswrapper[4833]: I0219 13:52:54.784043 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-8468b886b8-mz8xd_ad7485d9-4e14-49c1-bf60-8a0146d26df0/barbican-api/0.log"
Feb 19 13:52:54 crc kubenswrapper[4833]: I0219 13:52:54.975129 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-8468b886b8-mz8xd_ad7485d9-4e14-49c1-bf60-8a0146d26df0/barbican-api-log/0.log"
Feb 19 13:52:55 crc kubenswrapper[4833]: I0219 13:52:55.060366 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-56b45cfbd8-5dfjr_f30e4d86-a08b-4021-8c83-3fb5abe86152/barbican-keystone-listener/0.log"
Feb 19 13:52:55 crc kubenswrapper[4833]: I0219 13:52:55.099698 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-56b45cfbd8-5dfjr_f30e4d86-a08b-4021-8c83-3fb5abe86152/barbican-keystone-listener-log/0.log"
Feb 19 13:52:55 crc kubenswrapper[4833]: I0219 13:52:55.253554 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-546699bf4c-sqbpl_6595e595-9cbc-44bb-8629-a53da3b75bd6/barbican-worker/0.log"
Feb 19 13:52:55 crc kubenswrapper[4833]: I0219 13:52:55.288629 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-546699bf4c-sqbpl_6595e595-9cbc-44bb-8629-a53da3b75bd6/barbican-worker-log/0.log"
Feb 19 13:52:55 crc kubenswrapper[4833]: I0219 13:52:55.440179 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-kngjp_71fc87a7-2568-481c-a841-6500a69ba8b9/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 13:52:55 crc kubenswrapper[4833]: I0219 13:52:55.532159 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8d9232be-2376-4ec9-9f32-16de9f8942d0/ceilometer-central-agent/0.log"
Feb 19 13:52:55 crc kubenswrapper[4833]: I0219 13:52:55.586048 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8d9232be-2376-4ec9-9f32-16de9f8942d0/ceilometer-notification-agent/0.log"
Feb 19 13:52:55 crc kubenswrapper[4833]: I0219 13:52:55.649788 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8d9232be-2376-4ec9-9f32-16de9f8942d0/proxy-httpd/0.log"
Feb 19 13:52:55 crc kubenswrapper[4833]: I0219 13:52:55.721535 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8d9232be-2376-4ec9-9f32-16de9f8942d0/sg-core/0.log"
Feb 19 13:52:55 crc kubenswrapper[4833]: I0219 13:52:55.828485 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8ffac1cb-2dd7-4ff9-92e1-a41a23411f57/cinder-api/0.log"
Feb 19 13:52:55 crc kubenswrapper[4833]: I0219 13:52:55.845784 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_8ffac1cb-2dd7-4ff9-92e1-a41a23411f57/cinder-api-log/0.log"
Feb 19 13:52:56 crc kubenswrapper[4833]: I0219 13:52:56.009136 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_77866722-bd38-4757-b8a0-d2939b40d2ee/cinder-scheduler/0.log"
Feb 19 13:52:56 crc kubenswrapper[4833]: I0219 13:52:56.037753 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_77866722-bd38-4757-b8a0-d2939b40d2ee/probe/0.log"
Feb 19 13:52:56 crc kubenswrapper[4833]: I0219 13:52:56.255099 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-s6sb4_46a1f7d2-8e31-4ef8-8508-08be63d0fee2/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 13:52:56 crc kubenswrapper[4833]: I0219 13:52:56.284718 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-pwxsf_d1060705-48ca-43e4-8a72-0fbd655875a6/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 13:52:56 crc kubenswrapper[4833]: I0219 13:52:56.433583 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-lg95r_e9222a60-0b24-4d91-8002-74747339c9d5/init/0.log"
Feb 19 13:52:56 crc kubenswrapper[4833]: I0219 13:52:56.624989 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-lg95r_e9222a60-0b24-4d91-8002-74747339c9d5/init/0.log"
Feb 19 13:52:56 crc kubenswrapper[4833]: I0219 13:52:56.659093 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-gn8cb_ade3dcb5-7a6a-4bef-a706-01dbd2d074a1/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 13:52:56 crc kubenswrapper[4833]: I0219 13:52:56.689428 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-lg95r_e9222a60-0b24-4d91-8002-74747339c9d5/dnsmasq-dns/0.log"
Feb 19 13:52:56 crc kubenswrapper[4833]: I0219 13:52:56.888955 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_42dcfe39-3d5b-4e0a-8b07-658ec7f665ba/glance-httpd/0.log"
Feb 19 13:52:56 crc kubenswrapper[4833]: I0219 13:52:56.916151 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_42dcfe39-3d5b-4e0a-8b07-658ec7f665ba/glance-log/0.log"
Feb 19 13:52:57 crc kubenswrapper[4833]: I0219 13:52:57.054264 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_0f177d83-63c7-433e-aeb0-e8a91b6216f8/glance-httpd/0.log"
Feb 19 13:52:57 crc kubenswrapper[4833]: I0219 13:52:57.092970 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_0f177d83-63c7-433e-aeb0-e8a91b6216f8/glance-log/0.log"
Feb 19 13:52:57 crc kubenswrapper[4833]: I0219 13:52:57.196432 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7b954444d4-2mwt9_88341f77-7fab-4dba-be1d-8e11becd2953/horizon/0.log"
Feb 19 13:52:57 crc kubenswrapper[4833]: I0219 13:52:57.414300 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-qtt5g_1abf51ed-df14-4ea8-a9df-e6ee9810e40e/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 13:52:57 crc kubenswrapper[4833]: I0219 13:52:57.552520 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7b954444d4-2mwt9_88341f77-7fab-4dba-be1d-8e11becd2953/horizon-log/0.log"
Feb 19 13:52:57 crc kubenswrapper[4833]: I0219 13:52:57.925075 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-vj2ln_56fbaae9-eaee-4f1d-99b6-53bc919ecb4b/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 13:52:58 crc kubenswrapper[4833]: I0219 13:52:58.088941 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_d7f4a85b-484c-414d-969f-58baa362a1ff/kube-state-metrics/0.log"
Feb 19 13:52:58 crc kubenswrapper[4833]: I0219 13:52:58.106045 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-54c7bb578f-26gwx_a880b98a-d4ab-49dd-bc84-ff52c67c5432/keystone-api/0.log"
Feb 19 13:52:58 crc kubenswrapper[4833]: I0219 13:52:58.263379 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-dt2qv_c1571e5f-9c2d-4ac6-acc2-ad826ffd85a0/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 13:52:58 crc kubenswrapper[4833]: I0219 13:52:58.678224 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-75865f57f7-4q4h9_cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7/neutron-httpd/0.log"
Feb 19 13:52:58 crc kubenswrapper[4833]: I0219 13:52:58.739366 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-75865f57f7-4q4h9_cfd8c3f4-cc9a-4ae5-b1a5-f4bf4bdd44a7/neutron-api/0.log"
Feb 19 13:52:59 crc kubenswrapper[4833]: I0219 13:52:59.432166 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-ch7jx_5e2ba26c-7bab-411e-80f6-bf1e77dce436/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 13:52:59 crc kubenswrapper[4833]: I0219 13:52:59.813346 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_70e644c8-55f1-4d68-8cfc-f4a12ed42ec2/nova-api-log/0.log"
Feb 19 13:52:59 crc kubenswrapper[4833]: I0219 13:52:59.929562 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_5cccac96-51b3-457e-86eb-bd59ce49b7cf/nova-cell0-conductor-conductor/0.log"
Feb 19 13:53:00 crc kubenswrapper[4833]: I0219 13:53:00.206417 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_cff86803-41bf-463e-a5ea-30f70425a39a/nova-cell1-conductor-conductor/0.log"
Feb 19 13:53:00 crc kubenswrapper[4833]: I0219 13:53:00.226429 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_70e644c8-55f1-4d68-8cfc-f4a12ed42ec2/nova-api-api/0.log"
Feb 19 13:53:00 crc kubenswrapper[4833]: I0219 13:53:00.319385 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_77001968-5717-445a-b12e-a1318c720b23/nova-cell1-novncproxy-novncproxy/0.log"
Feb 19 13:53:00 crc kubenswrapper[4833]: I0219 13:53:00.507328 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-rk95c_cf0f1512-542b-4358-b74b-57df19d9c7d3/nova-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 13:53:00 crc kubenswrapper[4833]: I0219 13:53:00.584377 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_59a4389e-efff-4621-bc9d-548f8c2b78f9/nova-metadata-log/0.log"
Feb 19 13:53:01 crc kubenswrapper[4833]: I0219 13:53:01.209065 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_b99a7fca-b744-4c37-abf6-76f23e90f7da/nova-scheduler-scheduler/0.log"
Feb 19 13:53:01 crc kubenswrapper[4833]: I0219 13:53:01.220710 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_866102e5-b1c2-4f33-9c34-312be44faea7/mysql-bootstrap/0.log"
Feb 19 13:53:01 crc kubenswrapper[4833]: I0219 13:53:01.327178 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_866102e5-b1c2-4f33-9c34-312be44faea7/mysql-bootstrap/0.log"
Feb 19 13:53:01 crc kubenswrapper[4833]: I0219 13:53:01.648356 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_866102e5-b1c2-4f33-9c34-312be44faea7/galera/0.log"
Feb 19 13:53:01 crc kubenswrapper[4833]: I0219 13:53:01.714898 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_679ec18d-1d70-4cc5-8103-b28f0809a45e/mysql-bootstrap/0.log"
Feb 19 13:53:01 crc kubenswrapper[4833]: I0219 13:53:01.997963 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_679ec18d-1d70-4cc5-8103-b28f0809a45e/galera/0.log"
Feb 19 13:53:02 crc kubenswrapper[4833]: I0219 13:53:02.017747 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_679ec18d-1d70-4cc5-8103-b28f0809a45e/mysql-bootstrap/0.log"
Feb 19 13:53:02 crc kubenswrapper[4833]: I0219 13:53:02.183775 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_df72920d-e022-48f9-b41c-f2fe6ed14da9/openstackclient/0.log"
Feb 19 13:53:02 crc kubenswrapper[4833]: I0219 13:53:02.294732 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-fbrgv_488bba31-e718-4ef1-bd04-6ed3fe165c89/ovn-controller/0.log"
Feb 19 13:53:02 crc kubenswrapper[4833]: I0219 13:53:02.348013 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_59a4389e-efff-4621-bc9d-548f8c2b78f9/nova-metadata-metadata/0.log"
Feb 19 13:53:02 crc kubenswrapper[4833]: I0219 13:53:02.466994 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-d6gmv_ba309e83-ab80-44b0-95a6-01034dfcca68/openstack-network-exporter/0.log"
Feb 19 13:53:02 crc kubenswrapper[4833]: I0219 13:53:02.573968 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9jlg7_afdbcf60-89d8-426e-9323-1347d5cb238f/ovsdb-server-init/0.log"
Feb 19 13:53:02 crc kubenswrapper[4833]: I0219 13:53:02.812790 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9jlg7_afdbcf60-89d8-426e-9323-1347d5cb238f/ovsdb-server-init/0.log"
Feb 19 13:53:02 crc kubenswrapper[4833]: I0219 13:53:02.849792 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9jlg7_afdbcf60-89d8-426e-9323-1347d5cb238f/ovs-vswitchd/0.log"
Feb 19 13:53:02 crc kubenswrapper[4833]: I0219 13:53:02.880226 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-9jlg7_afdbcf60-89d8-426e-9323-1347d5cb238f/ovsdb-server/0.log"
Feb 19 13:53:03 crc kubenswrapper[4833]: I0219 13:53:03.036614 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-6hdvq_3952291f-b3f9-4309-ae64-d6cbef7d6607/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 13:53:03 crc kubenswrapper[4833]: I0219 13:53:03.079126 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1e175eae-03fe-4c4b-b5d2-96df10844449/openstack-network-exporter/0.log"
Feb 19 13:53:03 crc kubenswrapper[4833]: I0219 13:53:03.151677 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1e175eae-03fe-4c4b-b5d2-96df10844449/ovn-northd/0.log"
Feb 19 13:53:03 crc kubenswrapper[4833]: I0219 13:53:03.339371 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_55636a14-4194-419e-be9c-d4f8c4064d77/openstack-network-exporter/0.log"
Feb 19 13:53:03 crc kubenswrapper[4833]: I0219 13:53:03.379816 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_55636a14-4194-419e-be9c-d4f8c4064d77/ovsdbserver-nb/0.log"
Feb 19 13:53:03 crc kubenswrapper[4833]: I0219 13:53:03.549953 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1c5ea761-d056-4868-af9f-309486208889/openstack-network-exporter/0.log"
Feb 19 13:53:03 crc kubenswrapper[4833]: I0219 13:53:03.583190 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1c5ea761-d056-4868-af9f-309486208889/ovsdbserver-sb/0.log"
Feb 19 13:53:03 crc kubenswrapper[4833]: I0219 13:53:03.718295 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-64f9d5d984-h9kbm_5f9f5174-162c-418a-8f37-09af448d7716/placement-api/0.log"
Feb 19 13:53:03 crc kubenswrapper[4833]: I0219 13:53:03.813230 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-64f9d5d984-h9kbm_5f9f5174-162c-418a-8f37-09af448d7716/placement-log/0.log"
Feb 19 13:53:03 crc kubenswrapper[4833]: I0219 13:53:03.913533 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_52361cb4-eea4-49c7-b06b-acbe0ad24450/setup-container/0.log"
Feb 19 13:53:04 crc kubenswrapper[4833]: I0219 13:53:04.096478 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_52361cb4-eea4-49c7-b06b-acbe0ad24450/setup-container/0.log"
Feb 19 13:53:04 crc kubenswrapper[4833]: I0219 13:53:04.113486 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_95192227-96aa-4fa8-a7db-89f31efb056c/setup-container/0.log"
Feb 19 13:53:04 crc kubenswrapper[4833]: I0219 13:53:04.130658 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_52361cb4-eea4-49c7-b06b-acbe0ad24450/rabbitmq/0.log"
Feb 19 13:53:04 crc kubenswrapper[4833]: I0219 13:53:04.315350 4833 scope.go:117] "RemoveContainer" containerID="c8997d03737db42a2d58f2936e1f212ab53c9697cb8c50a8c4a60174788b9509"
Feb 19 13:53:04 crc kubenswrapper[4833]: E0219 13:53:04.315706 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:53:04 crc kubenswrapper[4833]: I0219 13:53:04.364703 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_95192227-96aa-4fa8-a7db-89f31efb056c/setup-container/0.log"
Feb 19 13:53:04 crc kubenswrapper[4833]: I0219 13:53:04.400800 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_95192227-96aa-4fa8-a7db-89f31efb056c/rabbitmq/0.log"
Feb 19 13:53:04 crc kubenswrapper[4833]: I0219 13:53:04.498037 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-cmwbg_80c2b43f-f289-49a9-a544-08316b461536/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 13:53:04 crc kubenswrapper[4833]: I0219 13:53:04.640042 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-xnzhk_810a4b4a-798a-4dbc-9f86-81377c37d104/redhat-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 13:53:04 crc kubenswrapper[4833]: I0219 13:53:04.751349 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-55wxt_0457ceaa-c998-49db-bfa7-f837bf684537/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 13:53:04 crc kubenswrapper[4833]: I0219 13:53:04.874507 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-8s8fs_290637db-709b-4ce8-a200-76e9bf643d55/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 13:53:05 crc kubenswrapper[4833]: I0219 13:53:05.151324 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-fndfb_f25a8685-c4bf-460a-a553-e26d1ddc9d09/ssh-known-hosts-edpm-deployment/0.log"
Feb 19 13:53:05 crc kubenswrapper[4833]: I0219 13:53:05.379964 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-756fd4958c-8cv9q_c148317b-fc12-4940-8fb0-587c8eff29f9/proxy-server/0.log"
Feb 19 13:53:05 crc kubenswrapper[4833]: I0219 13:53:05.479701 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-p2m8p_46126eda-f691-4339-966c-615190176dea/swift-ring-rebalance/0.log"
Feb 19 13:53:05 crc kubenswrapper[4833]: I0219 13:53:05.503147 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-756fd4958c-8cv9q_c148317b-fc12-4940-8fb0-587c8eff29f9/proxy-httpd/0.log"
Feb 19 13:53:05 crc kubenswrapper[4833]: I0219 13:53:05.675139 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0dfc7a49-4c64-4c4c-b0a9-eea1d8734612/account-auditor/0.log"
Feb 19 13:53:05 crc kubenswrapper[4833]: I0219 13:53:05.712303 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0dfc7a49-4c64-4c4c-b0a9-eea1d8734612/account-reaper/0.log"
Feb 19 13:53:05 crc kubenswrapper[4833]: I0219 13:53:05.762595 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0dfc7a49-4c64-4c4c-b0a9-eea1d8734612/account-replicator/0.log"
Feb 19 13:53:05 crc kubenswrapper[4833]: I0219 13:53:05.942771 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0dfc7a49-4c64-4c4c-b0a9-eea1d8734612/container-auditor/0.log"
Feb 19 13:53:05 crc kubenswrapper[4833]: I0219 13:53:05.958878 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0dfc7a49-4c64-4c4c-b0a9-eea1d8734612/account-server/0.log"
Feb 19 13:53:05 crc kubenswrapper[4833]: I0219 13:53:05.966156 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0dfc7a49-4c64-4c4c-b0a9-eea1d8734612/container-server/0.log"
Feb 19 13:53:06 crc kubenswrapper[4833]: I0219 13:53:06.008574 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0dfc7a49-4c64-4c4c-b0a9-eea1d8734612/container-replicator/0.log"
Feb 19 13:53:06 crc kubenswrapper[4833]: I0219 13:53:06.140013 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0dfc7a49-4c64-4c4c-b0a9-eea1d8734612/container-updater/0.log"
Feb 19 13:53:06 crc kubenswrapper[4833]: I0219 13:53:06.173816 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0dfc7a49-4c64-4c4c-b0a9-eea1d8734612/object-expirer/0.log"
Feb 19 13:53:06 crc kubenswrapper[4833]: I0219 13:53:06.225798 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0dfc7a49-4c64-4c4c-b0a9-eea1d8734612/object-replicator/0.log"
Feb 19 13:53:06 crc kubenswrapper[4833]: I0219 13:53:06.228811 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0dfc7a49-4c64-4c4c-b0a9-eea1d8734612/object-auditor/0.log"
Feb 19 13:53:06 crc kubenswrapper[4833]: I0219 13:53:06.361367 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0dfc7a49-4c64-4c4c-b0a9-eea1d8734612/object-server/0.log"
Feb 19 13:53:06 crc kubenswrapper[4833]: I0219 13:53:06.430239 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0dfc7a49-4c64-4c4c-b0a9-eea1d8734612/object-updater/0.log"
Feb 19 13:53:06 crc kubenswrapper[4833]: I0219 13:53:06.447108 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0dfc7a49-4c64-4c4c-b0a9-eea1d8734612/swift-recon-cron/0.log"
Feb 19 13:53:06 crc kubenswrapper[4833]: I0219 13:53:06.487551 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_0dfc7a49-4c64-4c4c-b0a9-eea1d8734612/rsync/0.log"
Feb 19 13:53:06 crc kubenswrapper[4833]: I0219 13:53:06.769388 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-7hjpc_3d0af35d-1268-4a37-a176-e2ca439c6ba6/telemetry-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 13:53:06 crc kubenswrapper[4833]: I0219 13:53:06.788048 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_fbca1583-1d12-4e49-bda3-864536093e85/tempest-tests-tempest-tests-runner/0.log"
Feb 19 13:53:06 crc kubenswrapper[4833]: I0219 13:53:06.887745 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_398ff1ce-0aa5-4f20-9ff3-e21807d5771c/test-operator-logs-container/0.log"
Feb 19 13:53:06 crc kubenswrapper[4833]: I0219 13:53:06.972178 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-5hfnv_6533780f-2a0a-484f-afa5-ad561486e8a2/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 19 13:53:15 crc kubenswrapper[4833]: I0219 13:53:15.518616 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_8c04a00b-8613-472f-bf1b-e1d26ed34312/memcached/0.log"
Feb 19 13:53:17 crc kubenswrapper[4833]: I0219 13:53:17.314684 4833 scope.go:117] "RemoveContainer" containerID="c8997d03737db42a2d58f2936e1f212ab53c9697cb8c50a8c4a60174788b9509"
Feb 19 13:53:17 crc kubenswrapper[4833]: E0219 13:53:17.315227 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:53:28 crc kubenswrapper[4833]: I0219 13:53:28.315181 4833 scope.go:117] "RemoveContainer" containerID="c8997d03737db42a2d58f2936e1f212ab53c9697cb8c50a8c4a60174788b9509"
Feb 19 13:53:28 crc kubenswrapper[4833]: E0219 13:53:28.316145 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:53:35 crc kubenswrapper[4833]: I0219 13:53:35.882609 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst_095e674e-7762-4748-896c-1e0b2dd9fbfc/util/0.log"
Feb 19 13:53:36 crc kubenswrapper[4833]: I0219 13:53:36.095100 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst_095e674e-7762-4748-896c-1e0b2dd9fbfc/util/0.log"
Feb 19 13:53:36 crc kubenswrapper[4833]: I0219 13:53:36.124094 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst_095e674e-7762-4748-896c-1e0b2dd9fbfc/pull/0.log"
Feb 19 13:53:36 crc kubenswrapper[4833]: I0219 13:53:36.175553 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst_095e674e-7762-4748-896c-1e0b2dd9fbfc/pull/0.log"
Feb 19 13:53:36 crc kubenswrapper[4833]: I0219 13:53:36.938505 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst_095e674e-7762-4748-896c-1e0b2dd9fbfc/util/0.log"
Feb 19 13:53:36 crc kubenswrapper[4833]: I0219 13:53:36.977003 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst_095e674e-7762-4748-896c-1e0b2dd9fbfc/pull/0.log"
Feb 19 13:53:36 crc kubenswrapper[4833]: I0219 13:53:36.994983 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_05efc0262915a42ac472d6ca085c71af2c06c61e0e0ba85f7417357aefcbxst_095e674e-7762-4748-896c-1e0b2dd9fbfc/extract/0.log"
Feb 19 13:53:37 crc kubenswrapper[4833]: I0219 13:53:37.538313 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-ztqm7_dd33e5e9-4983-4954-966e-a693cc5c299b/manager/0.log"
Feb 19 13:53:37 crc kubenswrapper[4833]: I0219 13:53:37.832756 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-hq98q_c960bafe-e1ce-4635-a849-758a84db3b0e/manager/0.log"
Feb 19 13:53:38 crc kubenswrapper[4833]: I0219 13:53:38.061973 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-wthd9_70feab77-0665-499a-b6e2-b35b95384ab7/manager/0.log"
Feb 19 13:53:38 crc kubenswrapper[4833]: I0219 13:53:38.537936 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-ng9mx_0168dd3a-5296-440d-8b46-d858da1cfeb6/manager/0.log"
Feb 19 13:53:38 crc kubenswrapper[4833]: I0219 13:53:38.965220 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-7d2vx_eed2b359-6b1f-4db4-947a-6ed3bf4385cc/manager/0.log"
Feb 19 13:53:39 crc kubenswrapper[4833]: I0219 13:53:39.144618 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-cvzzp_ab70788d-b168-497b-bea0-4847ee80ce73/manager/0.log"
Feb 19 13:53:39 crc kubenswrapper[4833]: I0219 13:53:39.307679 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-hn9cd_1ba8dd89-0865-4766-b216-b906d4d6f77a/manager/0.log"
Feb 19 13:53:39 crc kubenswrapper[4833]: I0219 13:53:39.396200 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-8wsws_a09fe0a0-c328-4306-b1de-c8bddc00378f/manager/0.log"
Feb 19 13:53:39 crc kubenswrapper[4833]: I0219 13:53:39.530355 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-njc9d_a14096df-2211-4053-afb4-ad8d68ff0723/manager/0.log"
Feb 19 13:53:39 crc kubenswrapper[4833]: I0219 13:53:39.659789 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-7n7vf_e6de77c2-2965-48a3-a79a-75539ca32b8b/manager/0.log"
Feb 19 13:53:39 crc kubenswrapper[4833]: I0219 13:53:39.860980 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-hprlt_eaf010f7-5113-4970-b963-682d17243fc9/manager/0.log"
Feb 19 13:53:39 crc kubenswrapper[4833]: I0219 13:53:39.982380 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-w4n4c_8df5aecb-140d-4845-b07c-ab75586e4b54/manager/0.log"
Feb 19 13:53:40 crc kubenswrapper[4833]: I0219 13:53:40.265764 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9c5k7gz_a8783b50-8a5e-4c9f-8f4b-513e4e0c7122/manager/0.log"
Feb 19 13:53:40 crc kubenswrapper[4833]: I0219 13:53:40.322540 4833 scope.go:117] "RemoveContainer" containerID="c8997d03737db42a2d58f2936e1f212ab53c9697cb8c50a8c4a60174788b9509"
Feb 19 13:53:40 crc kubenswrapper[4833]: E0219 13:53:40.322879 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:53:40 crc kubenswrapper[4833]: I0219 13:53:40.675380 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-568f98c69-t2vv5_59087e2d-5038-44cd-ab4d-1d1340e51c75/operator/0.log"
Feb 19 13:53:40 crc kubenswrapper[4833]: I0219 13:53:40.833214 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-8gfsk_8f9a2baf-c1a1-48a6-baa9-e73ad2dcac6e/registry-server/0.log"
Feb 19 13:53:41 crc kubenswrapper[4833]: I0219 13:53:41.087522 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-bbl2n_eddad8e9-ebc8-4772-9b30-76fc7bd09919/manager/0.log"
Feb 19 13:53:41 crc kubenswrapper[4833]: I0219 13:53:41.294250 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-hrnmv_3a2db5f5-bbec-4673-b32b-eef31c488a12/manager/0.log"
Feb 19 13:53:41 crc kubenswrapper[4833]: I0219 13:53:41.532871 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-zwzkq_1e12420e-fd8b-4ef2-bc12-9b3be0efa58a/operator/0.log"
Feb 19 13:53:41 crc kubenswrapper[4833]: I0219 13:53:41.757454 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-cn5hb_636db3e6-7c84-4f25-896e-e3a542bdff19/manager/0.log"
Feb 19 13:53:42 crc kubenswrapper[4833]: I0219 13:53:42.075996 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7f45b4ff68-lw9pq_e0e6cafc-957b-4ebd-ad08-1bef03debe49/manager/0.log"
Feb 19 13:53:42 crc kubenswrapper[4833]: I0219 13:53:42.207535 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-cn6cv_84b0c5a7-e111-4ee9-999b-5da00d00ffd0/manager/0.log"
Feb 19 13:53:42 crc kubenswrapper[4833]: I0219 13:53:42.224907 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-744c6f7bcc-jsmlm_81d2c5dc-91fd-4135-8408-104fc7badb60/manager/0.log"
Feb 19 13:53:42 crc kubenswrapper[4833]: I0219 13:53:42.439312 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-24cxm_d7b1ebb3-ea0b-4e2f-b27a-e77abee17693/manager/0.log"
Feb 19 13:53:42 crc kubenswrapper[4833]: I0219 13:53:42.460888 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-7z4m9_4deeacce-2501-4276-98cf-cb615e0b4dce/manager/0.log"
Feb 19 13:53:47 crc kubenswrapper[4833]: I0219 13:53:47.410661 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-g59gc_4b94f9da-5e45-4428-a709-24574552d77e/manager/0.log"
Feb 19 13:53:53 crc kubenswrapper[4833]: I0219 13:53:53.315636 4833 scope.go:117] "RemoveContainer" containerID="c8997d03737db42a2d58f2936e1f212ab53c9697cb8c50a8c4a60174788b9509"
Feb 19 13:53:53 crc kubenswrapper[4833]: E0219 13:53:53.316580 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:54:05 crc kubenswrapper[4833]: I0219 13:54:05.243334 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-4gj46_ea6cc7f7-b2fa-40d4-93cd-795a01861ecb/control-plane-machine-set-operator/0.log"
Feb 19 13:54:05 crc kubenswrapper[4833]: I0219 13:54:05.400165 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-lmrs2_8a863328-15b8-46bc-9ffd-faa97add46ea/kube-rbac-proxy/0.log"
Feb 19 13:54:05 crc kubenswrapper[4833]: I0219 13:54:05.448178 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-lmrs2_8a863328-15b8-46bc-9ffd-faa97add46ea/machine-api-operator/0.log"
Feb 19 13:54:08 crc kubenswrapper[4833]: I0219 13:54:08.316053 4833 scope.go:117] "RemoveContainer" containerID="c8997d03737db42a2d58f2936e1f212ab53c9697cb8c50a8c4a60174788b9509"
Feb 19 13:54:08 crc kubenswrapper[4833]: E0219 13:54:08.316927 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:54:19 crc kubenswrapper[4833]: I0219 13:54:19.315622 4833 scope.go:117] "RemoveContainer" containerID="c8997d03737db42a2d58f2936e1f212ab53c9697cb8c50a8c4a60174788b9509"
Feb 19 13:54:19 crc kubenswrapper[4833]: E0219 13:54:19.316801 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:54:19 crc kubenswrapper[4833]: I0219 13:54:19.870745 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-rpjfw_96ec41c8-cde8-48b8-99ac-9b56a2e86761/cert-manager-cainjector/0.log"
Feb 19 13:54:19 crc kubenswrapper[4833]: I0219 13:54:19.921148 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-2zxnh_0ec64263-cb6a-407b-9ef7-a06af9f1df98/cert-manager-controller/0.log"
Feb 19 13:54:20 crc kubenswrapper[4833]: I0219 13:54:20.062912 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-t7gnv_bba14386-87d4-4397-9ba7-beaafe4c15de/cert-manager-webhook/0.log"
Feb 19 13:54:31 crc kubenswrapper[4833]: I0219 13:54:31.316076 4833 scope.go:117] "RemoveContainer" containerID="c8997d03737db42a2d58f2936e1f212ab53c9697cb8c50a8c4a60174788b9509"
Feb 19 13:54:31 crc kubenswrapper[4833]: E0219 13:54:31.316953 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:54:33 crc kubenswrapper[4833]: I0219 13:54:33.875760 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-dd2ct_61daa4d0-c750-45c0-83b1-99ec44ba8842/nmstate-console-plugin/0.log"
Feb 19 13:54:34 crc kubenswrapper[4833]: I0219 13:54:34.059857 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-pb67x_d5e6d19d-fb8f-4313-bd78-d5f82fa79e40/nmstate-handler/0.log"
Feb 19 13:54:34 crc kubenswrapper[4833]: I0219 13:54:34.130809 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-4szmh_ae4fac4c-baa2-4e07-aa2a-e1fa2f28aeed/nmstate-metrics/0.log"
Feb 19 13:54:34 crc kubenswrapper[4833]: I0219 13:54:34.132108 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-4szmh_ae4fac4c-baa2-4e07-aa2a-e1fa2f28aeed/kube-rbac-proxy/0.log"
Feb 19 13:54:34 crc kubenswrapper[4833]: I0219 13:54:34.302732 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-sqwkx_f99e9621-3d59-431a-874e-0ecb2370cda1/nmstate-operator/0.log"
Feb 19 13:54:34 crc kubenswrapper[4833]: I0219 13:54:34.332137 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-x2ld2_336bca49-9b02-4814-a710-f133cc1d3e46/nmstate-webhook/0.log"
Feb 19 13:54:44 crc kubenswrapper[4833]: I0219 13:54:44.316683 4833 scope.go:117] "RemoveContainer" containerID="c8997d03737db42a2d58f2936e1f212ab53c9697cb8c50a8c4a60174788b9509"
Feb 19 13:54:44 crc kubenswrapper[4833]: E0219 13:54:44.319536 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
Feb 19 13:54:56 crc kubenswrapper[4833]: I0219 13:54:56.316184 4833 scope.go:117] "RemoveContainer" containerID="c8997d03737db42a2d58f2936e1f212ab53c9697cb8c50a8c4a60174788b9509"
Feb 19 13:54:57 crc kubenswrapper[4833]: I0219 13:54:57.454664 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" event={"ID":"a396d626-cea2-42cf-84c5-943b0b85a92b","Type":"ContainerStarted","Data":"591c0b25dd56124bace57f3410df3ed7c130b33cb8b1ea78fd95240b008644a4"}
Feb 19 13:55:03 crc kubenswrapper[4833]: I0219 13:55:03.354022 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-7hv4q_810d0dc6-4fd1-4c62-838b-f759e361ea26/kube-rbac-proxy/0.log"
Feb 19 13:55:03 crc kubenswrapper[4833]: I0219 13:55:03.431955 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-7hv4q_810d0dc6-4fd1-4c62-838b-f759e361ea26/controller/0.log"
Feb 19 13:55:03 crc kubenswrapper[4833]: I0219 13:55:03.638575 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/cp-frr-files/0.log"
Feb 19 13:55:03 crc kubenswrapper[4833]: I0219 13:55:03.789822 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/cp-reloader/0.log"
Feb 19 13:55:03 crc kubenswrapper[4833]: I0219 13:55:03.795303 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/cp-frr-files/0.log"
Feb 19 13:55:03 crc kubenswrapper[4833]: I0219 13:55:03.824006 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/cp-metrics/0.log"
Feb 19 13:55:03 crc kubenswrapper[4833]: I0219 13:55:03.876071 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/cp-reloader/0.log"
Feb 19 13:55:04 crc kubenswrapper[4833]: I0219 13:55:04.034629 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/cp-frr-files/0.log"
Feb 19 13:55:04 crc kubenswrapper[4833]: I0219 13:55:04.037998 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/cp-metrics/0.log"
Feb 19 13:55:04 crc kubenswrapper[4833]: I0219 13:55:04.040469 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/cp-reloader/0.log"
Feb 19 13:55:04 crc kubenswrapper[4833]: I0219 13:55:04.077576 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/cp-metrics/0.log"
Feb 19 13:55:04 crc kubenswrapper[4833]: I0219 13:55:04.245181 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/cp-metrics/0.log"
Feb 19 13:55:04 crc kubenswrapper[4833]: I0219 13:55:04.248970 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/controller/0.log"
Feb 19 13:55:04 crc kubenswrapper[4833]: I0219 13:55:04.255743 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/cp-frr-files/0.log"
Feb 19 13:55:04 crc kubenswrapper[4833]: I0219 13:55:04.265756 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/cp-reloader/0.log"
Feb 19 13:55:04 crc kubenswrapper[4833]: I0219 13:55:04.452760 4833 log.go:25] "Finished parsing log file"
path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/kube-rbac-proxy/0.log" Feb 19 13:55:04 crc kubenswrapper[4833]: I0219 13:55:04.462159 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/frr-metrics/0.log" Feb 19 13:55:04 crc kubenswrapper[4833]: I0219 13:55:04.464107 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/kube-rbac-proxy-frr/0.log" Feb 19 13:55:04 crc kubenswrapper[4833]: I0219 13:55:04.645846 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/reloader/0.log" Feb 19 13:55:04 crc kubenswrapper[4833]: I0219 13:55:04.737269 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-69h4r_62869774-530a-477d-bac0-df5e4fba9daa/frr-k8s-webhook-server/0.log" Feb 19 13:55:04 crc kubenswrapper[4833]: I0219 13:55:04.938971 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-595bc44cf4-flp9d_84aafe4e-e69c-4cdb-8987-71eb568e3c6b/manager/0.log" Feb 19 13:55:05 crc kubenswrapper[4833]: I0219 13:55:05.648096 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-55ff9b8c6-prv8k_89d5e852-a20e-4eb4-a37a-6ecdbaf05484/webhook-server/0.log" Feb 19 13:55:05 crc kubenswrapper[4833]: I0219 13:55:05.650904 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5dn4l_baf68531-b18e-4d82-9787-08a0c9381707/frr/0.log" Feb 19 13:55:05 crc kubenswrapper[4833]: I0219 13:55:05.684560 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9tx4p_e5afb617-4d1b-4c96-a669-f669e870501f/kube-rbac-proxy/0.log" Feb 19 13:55:06 crc kubenswrapper[4833]: I0219 13:55:06.095485 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-9tx4p_e5afb617-4d1b-4c96-a669-f669e870501f/speaker/0.log" Feb 19 13:55:20 crc kubenswrapper[4833]: I0219 13:55:20.373903 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx_29dbbeed-6768-4f91-a9ab-ad93f33f9896/util/0.log" Feb 19 13:55:20 crc kubenswrapper[4833]: I0219 13:55:20.547549 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx_29dbbeed-6768-4f91-a9ab-ad93f33f9896/pull/0.log" Feb 19 13:55:20 crc kubenswrapper[4833]: I0219 13:55:20.565509 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx_29dbbeed-6768-4f91-a9ab-ad93f33f9896/util/0.log" Feb 19 13:55:20 crc kubenswrapper[4833]: I0219 13:55:20.577507 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx_29dbbeed-6768-4f91-a9ab-ad93f33f9896/pull/0.log" Feb 19 13:55:20 crc kubenswrapper[4833]: I0219 13:55:20.733335 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx_29dbbeed-6768-4f91-a9ab-ad93f33f9896/util/0.log" Feb 19 13:55:20 crc kubenswrapper[4833]: I0219 13:55:20.745486 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx_29dbbeed-6768-4f91-a9ab-ad93f33f9896/pull/0.log" Feb 19 13:55:20 crc kubenswrapper[4833]: I0219 13:55:20.788386 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2134blsx_29dbbeed-6768-4f91-a9ab-ad93f33f9896/extract/0.log" Feb 19 13:55:20 crc kubenswrapper[4833]: I0219 13:55:20.905465 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7cc2n_d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4/extract-utilities/0.log" Feb 19 13:55:21 crc kubenswrapper[4833]: I0219 13:55:21.076884 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7cc2n_d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4/extract-utilities/0.log" Feb 19 13:55:21 crc kubenswrapper[4833]: I0219 13:55:21.084965 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7cc2n_d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4/extract-content/0.log" Feb 19 13:55:21 crc kubenswrapper[4833]: I0219 13:55:21.154487 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7cc2n_d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4/extract-content/0.log" Feb 19 13:55:21 crc kubenswrapper[4833]: I0219 13:55:21.251780 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7cc2n_d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4/extract-content/0.log" Feb 19 13:55:21 crc kubenswrapper[4833]: I0219 13:55:21.268519 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7cc2n_d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4/extract-utilities/0.log" Feb 19 13:55:21 crc kubenswrapper[4833]: I0219 13:55:21.502814 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mkws7_6237e030-4362-477a-a4dc-b18cbfa467fe/extract-utilities/0.log" Feb 19 13:55:21 crc kubenswrapper[4833]: I0219 13:55:21.705031 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mkws7_6237e030-4362-477a-a4dc-b18cbfa467fe/extract-content/0.log" Feb 19 13:55:21 crc kubenswrapper[4833]: I0219 13:55:21.765331 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mkws7_6237e030-4362-477a-a4dc-b18cbfa467fe/extract-content/0.log" Feb 19 13:55:21 crc kubenswrapper[4833]: I0219 13:55:21.777620 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mkws7_6237e030-4362-477a-a4dc-b18cbfa467fe/extract-utilities/0.log" Feb 19 13:55:21 crc kubenswrapper[4833]: I0219 13:55:21.797932 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7cc2n_d41b2c7c-9ca2-46f4-91b0-8b7cf419d5c4/registry-server/0.log" Feb 19 13:55:21 crc kubenswrapper[4833]: I0219 13:55:21.910789 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mkws7_6237e030-4362-477a-a4dc-b18cbfa467fe/extract-utilities/0.log" Feb 19 13:55:21 crc kubenswrapper[4833]: I0219 13:55:21.922402 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mkws7_6237e030-4362-477a-a4dc-b18cbfa467fe/extract-content/0.log" Feb 19 13:55:22 crc kubenswrapper[4833]: I0219 13:55:22.090091 4833 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx_a61c47e0-c103-451d-802c-fbdebf10dbd9/util/0.log" Feb 19 13:55:22 crc kubenswrapper[4833]: I0219 13:55:22.397207 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx_a61c47e0-c103-451d-802c-fbdebf10dbd9/pull/0.log" Feb 19 13:55:22 crc kubenswrapper[4833]: I0219 13:55:22.400276 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx_a61c47e0-c103-451d-802c-fbdebf10dbd9/util/0.log" Feb 19 13:55:22 crc kubenswrapper[4833]: I0219 13:55:22.426233 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx_a61c47e0-c103-451d-802c-fbdebf10dbd9/pull/0.log" Feb 19 13:55:22 crc kubenswrapper[4833]: I0219 13:55:22.635244 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx_a61c47e0-c103-451d-802c-fbdebf10dbd9/pull/0.log" Feb 19 13:55:22 crc kubenswrapper[4833]: I0219 13:55:22.649663 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx_a61c47e0-c103-451d-802c-fbdebf10dbd9/util/0.log" Feb 19 13:55:22 crc kubenswrapper[4833]: I0219 13:55:22.690216 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca86shx_a61c47e0-c103-451d-802c-fbdebf10dbd9/extract/0.log" Feb 19 13:55:22 crc kubenswrapper[4833]: I0219 13:55:22.755841 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-mkws7_6237e030-4362-477a-a4dc-b18cbfa467fe/registry-server/0.log" Feb 19 13:55:22 crc kubenswrapper[4833]: I0219 13:55:22.819850 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-c78pj_29007976-47bd-4251-8d1e-043d4c87270d/marketplace-operator/0.log" Feb 19 13:55:22 crc kubenswrapper[4833]: I0219 13:55:22.923838 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4vk2l_c4f33a97-ce68-43a9-a79b-df50f34c1f96/extract-utilities/0.log" Feb 19 13:55:23 crc kubenswrapper[4833]: I0219 13:55:23.120908 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4vk2l_c4f33a97-ce68-43a9-a79b-df50f34c1f96/extract-utilities/0.log" Feb 19 13:55:23 crc kubenswrapper[4833]: I0219 13:55:23.122289 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4vk2l_c4f33a97-ce68-43a9-a79b-df50f34c1f96/extract-content/0.log" Feb 19 13:55:23 crc kubenswrapper[4833]: I0219 13:55:23.184996 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4vk2l_c4f33a97-ce68-43a9-a79b-df50f34c1f96/extract-content/0.log" Feb 19 13:55:23 crc kubenswrapper[4833]: I0219 13:55:23.320103 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4vk2l_c4f33a97-ce68-43a9-a79b-df50f34c1f96/extract-content/0.log" Feb 19 13:55:23 crc kubenswrapper[4833]: I0219 13:55:23.333692 4833 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-4vk2l_c4f33a97-ce68-43a9-a79b-df50f34c1f96/extract-utilities/0.log" Feb 19 13:55:23 crc kubenswrapper[4833]: I0219 13:55:23.460603 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4vk2l_c4f33a97-ce68-43a9-a79b-df50f34c1f96/registry-server/0.log" Feb 19 13:55:23 crc kubenswrapper[4833]: I0219 13:55:23.512812 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kxdnw_845b40fa-5ca5-47fd-bf13-3b84c9951be6/extract-utilities/0.log" Feb 19 13:55:23 crc kubenswrapper[4833]: I0219 13:55:23.686025 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kxdnw_845b40fa-5ca5-47fd-bf13-3b84c9951be6/extract-utilities/0.log" Feb 19 13:55:23 crc kubenswrapper[4833]: I0219 13:55:23.699385 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kxdnw_845b40fa-5ca5-47fd-bf13-3b84c9951be6/extract-content/0.log" Feb 19 13:55:23 crc kubenswrapper[4833]: I0219 13:55:23.714440 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kxdnw_845b40fa-5ca5-47fd-bf13-3b84c9951be6/extract-content/0.log" Feb 19 13:55:23 crc kubenswrapper[4833]: I0219 13:55:23.935509 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kxdnw_845b40fa-5ca5-47fd-bf13-3b84c9951be6/extract-utilities/0.log" Feb 19 13:55:23 crc kubenswrapper[4833]: I0219 13:55:23.945958 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kxdnw_845b40fa-5ca5-47fd-bf13-3b84c9951be6/extract-content/0.log" Feb 19 13:55:24 crc kubenswrapper[4833]: I0219 13:55:24.551067 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kxdnw_845b40fa-5ca5-47fd-bf13-3b84c9951be6/registry-server/0.log" Feb 19 13:56:23 crc kubenswrapper[4833]: I0219 13:56:23.559922 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ppk68"] Feb 19 13:56:23 crc kubenswrapper[4833]: E0219 13:56:23.563299 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a447734-b60e-4ae3-bc4f-5534ab5d8afb" containerName="container-00" Feb 19 13:56:23 crc kubenswrapper[4833]: I0219 13:56:23.563345 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a447734-b60e-4ae3-bc4f-5534ab5d8afb" containerName="container-00" Feb 19 13:56:23 crc kubenswrapper[4833]: I0219 13:56:23.563875 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a447734-b60e-4ae3-bc4f-5534ab5d8afb" containerName="container-00" Feb 19 13:56:23 crc kubenswrapper[4833]: I0219 13:56:23.568145 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ppk68" Feb 19 13:56:23 crc kubenswrapper[4833]: I0219 13:56:23.576815 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ppk68"] Feb 19 13:56:23 crc kubenswrapper[4833]: I0219 13:56:23.668069 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn7pr\" (UniqueName: \"kubernetes.io/projected/60c177d1-3e68-4afb-835f-1a0fc30f0c40-kube-api-access-dn7pr\") pod \"redhat-operators-ppk68\" (UID: \"60c177d1-3e68-4afb-835f-1a0fc30f0c40\") " pod="openshift-marketplace/redhat-operators-ppk68" Feb 19 13:56:23 crc kubenswrapper[4833]: I0219 13:56:23.668179 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60c177d1-3e68-4afb-835f-1a0fc30f0c40-utilities\") pod \"redhat-operators-ppk68\" (UID: \"60c177d1-3e68-4afb-835f-1a0fc30f0c40\") " pod="openshift-marketplace/redhat-operators-ppk68" Feb 19 13:56:23 crc kubenswrapper[4833]: I0219 13:56:23.668208 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60c177d1-3e68-4afb-835f-1a0fc30f0c40-catalog-content\") pod \"redhat-operators-ppk68\" (UID: \"60c177d1-3e68-4afb-835f-1a0fc30f0c40\") " pod="openshift-marketplace/redhat-operators-ppk68" Feb 19 13:56:23 crc kubenswrapper[4833]: I0219 13:56:23.769722 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn7pr\" (UniqueName: \"kubernetes.io/projected/60c177d1-3e68-4afb-835f-1a0fc30f0c40-kube-api-access-dn7pr\") pod \"redhat-operators-ppk68\" (UID: \"60c177d1-3e68-4afb-835f-1a0fc30f0c40\") " pod="openshift-marketplace/redhat-operators-ppk68" Feb 19 13:56:23 crc kubenswrapper[4833]: I0219 13:56:23.769803 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60c177d1-3e68-4afb-835f-1a0fc30f0c40-utilities\") pod \"redhat-operators-ppk68\" (UID: \"60c177d1-3e68-4afb-835f-1a0fc30f0c40\") " pod="openshift-marketplace/redhat-operators-ppk68" Feb 19 13:56:23 crc kubenswrapper[4833]: I0219 13:56:23.769831 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60c177d1-3e68-4afb-835f-1a0fc30f0c40-catalog-content\") pod \"redhat-operators-ppk68\" (UID: \"60c177d1-3e68-4afb-835f-1a0fc30f0c40\") " pod="openshift-marketplace/redhat-operators-ppk68" Feb 19 13:56:23 crc kubenswrapper[4833]: I0219 13:56:23.770349 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60c177d1-3e68-4afb-835f-1a0fc30f0c40-catalog-content\") pod \"redhat-operators-ppk68\" (UID: \"60c177d1-3e68-4afb-835f-1a0fc30f0c40\") " pod="openshift-marketplace/redhat-operators-ppk68" Feb 19 13:56:23 crc kubenswrapper[4833]: I0219 13:56:23.770964 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60c177d1-3e68-4afb-835f-1a0fc30f0c40-utilities\") pod \"redhat-operators-ppk68\" (UID: \"60c177d1-3e68-4afb-835f-1a0fc30f0c40\") " pod="openshift-marketplace/redhat-operators-ppk68" Feb 19 13:56:23 crc kubenswrapper[4833]: I0219 13:56:23.793907 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dn7pr\" (UniqueName: \"kubernetes.io/projected/60c177d1-3e68-4afb-835f-1a0fc30f0c40-kube-api-access-dn7pr\") pod \"redhat-operators-ppk68\" (UID: \"60c177d1-3e68-4afb-835f-1a0fc30f0c40\") " pod="openshift-marketplace/redhat-operators-ppk68" Feb 19 13:56:23 crc kubenswrapper[4833]: I0219 13:56:23.941345 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ppk68" Feb 19 13:56:24 crc kubenswrapper[4833]: I0219 13:56:24.459575 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ppk68"] Feb 19 13:56:25 crc kubenswrapper[4833]: I0219 13:56:25.326104 4833 generic.go:334] "Generic (PLEG): container finished" podID="60c177d1-3e68-4afb-835f-1a0fc30f0c40" containerID="735d263da72e5d5161933227bf1b046dc764c38738d43f5ae28498ebeff294fa" exitCode=0 Feb 19 13:56:25 crc kubenswrapper[4833]: I0219 13:56:25.326403 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ppk68" event={"ID":"60c177d1-3e68-4afb-835f-1a0fc30f0c40","Type":"ContainerDied","Data":"735d263da72e5d5161933227bf1b046dc764c38738d43f5ae28498ebeff294fa"} Feb 19 13:56:25 crc kubenswrapper[4833]: I0219 13:56:25.326437 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ppk68" event={"ID":"60c177d1-3e68-4afb-835f-1a0fc30f0c40","Type":"ContainerStarted","Data":"8d8f74b9edfff29e29f74b82107018c1acf7bff3a397124b3ed2d21c03b9640a"} Feb 19 13:56:25 crc kubenswrapper[4833]: I0219 13:56:25.329427 4833 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 13:56:26 crc kubenswrapper[4833]: I0219 13:56:26.337295 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ppk68" event={"ID":"60c177d1-3e68-4afb-835f-1a0fc30f0c40","Type":"ContainerStarted","Data":"64eb7c5623c08a12ecbac9d9f67226a067bd4c7e3676ccc6a0e1a3bf50dd96f6"} Feb 19 13:56:27 crc kubenswrapper[4833]: I0219 13:56:27.349765 4833 generic.go:334] "Generic (PLEG): container finished" podID="60c177d1-3e68-4afb-835f-1a0fc30f0c40" containerID="64eb7c5623c08a12ecbac9d9f67226a067bd4c7e3676ccc6a0e1a3bf50dd96f6" exitCode=0 Feb 19 13:56:27 crc kubenswrapper[4833]: I0219 13:56:27.349813 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ppk68" event={"ID":"60c177d1-3e68-4afb-835f-1a0fc30f0c40","Type":"ContainerDied","Data":"64eb7c5623c08a12ecbac9d9f67226a067bd4c7e3676ccc6a0e1a3bf50dd96f6"} Feb 19 13:56:28 crc kubenswrapper[4833]: I0219 13:56:28.365940 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ppk68" event={"ID":"60c177d1-3e68-4afb-835f-1a0fc30f0c40","Type":"ContainerStarted","Data":"ed5b5eacb5c35c8ddbf365ebef552136204892678eb6080a5326b16a6dd5a267"} Feb 19 13:56:28 crc kubenswrapper[4833]: I0219 13:56:28.401639 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ppk68" podStartSLOduration=3.005935752 podStartE2EDuration="5.401613975s" podCreationTimestamp="2026-02-19 13:56:23 +0000 UTC" firstStartedPulling="2026-02-19 13:56:25.329157365 +0000 UTC m=+4195.724676133" lastFinishedPulling="2026-02-19 13:56:27.724835588 +0000 UTC m=+4198.120354356" observedRunningTime="2026-02-19 13:56:28.396837049 +0000 UTC m=+4198.792355837" watchObservedRunningTime="2026-02-19 13:56:28.401613975 +0000 UTC m=+4198.797132753" Feb 19 13:56:33 crc 
kubenswrapper[4833]: I0219 13:56:33.943237 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ppk68" Feb 19 13:56:33 crc kubenswrapper[4833]: I0219 13:56:33.943685 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ppk68" Feb 19 13:56:35 crc kubenswrapper[4833]: I0219 13:56:35.026804 4833 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ppk68" podUID="60c177d1-3e68-4afb-835f-1a0fc30f0c40" containerName="registry-server" probeResult="failure" output=< Feb 19 13:56:35 crc kubenswrapper[4833]: timeout: failed to connect service ":50051" within 1s Feb 19 13:56:35 crc kubenswrapper[4833]: > Feb 19 13:56:44 crc kubenswrapper[4833]: I0219 13:56:44.008384 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ppk68" Feb 19 13:56:44 crc kubenswrapper[4833]: I0219 13:56:44.102976 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ppk68" Feb 19 13:56:44 crc kubenswrapper[4833]: I0219 13:56:44.267792 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ppk68"] Feb 19 13:56:45 crc kubenswrapper[4833]: I0219 13:56:45.592713 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ppk68" podUID="60c177d1-3e68-4afb-835f-1a0fc30f0c40" containerName="registry-server" containerID="cri-o://ed5b5eacb5c35c8ddbf365ebef552136204892678eb6080a5326b16a6dd5a267" gracePeriod=2 Feb 19 13:56:46 crc kubenswrapper[4833]: I0219 13:56:46.515090 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ppk68" Feb 19 13:56:46 crc kubenswrapper[4833]: I0219 13:56:46.552429 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60c177d1-3e68-4afb-835f-1a0fc30f0c40-catalog-content\") pod \"60c177d1-3e68-4afb-835f-1a0fc30f0c40\" (UID: \"60c177d1-3e68-4afb-835f-1a0fc30f0c40\") " Feb 19 13:56:46 crc kubenswrapper[4833]: I0219 13:56:46.552795 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn7pr\" (UniqueName: \"kubernetes.io/projected/60c177d1-3e68-4afb-835f-1a0fc30f0c40-kube-api-access-dn7pr\") pod \"60c177d1-3e68-4afb-835f-1a0fc30f0c40\" (UID: \"60c177d1-3e68-4afb-835f-1a0fc30f0c40\") " Feb 19 13:56:46 crc kubenswrapper[4833]: I0219 13:56:46.552876 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60c177d1-3e68-4afb-835f-1a0fc30f0c40-utilities\") pod \"60c177d1-3e68-4afb-835f-1a0fc30f0c40\" (UID: \"60c177d1-3e68-4afb-835f-1a0fc30f0c40\") " Feb 19 13:56:46 crc kubenswrapper[4833]: I0219 13:56:46.554320 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60c177d1-3e68-4afb-835f-1a0fc30f0c40-utilities" (OuterVolumeSpecName: "utilities") pod "60c177d1-3e68-4afb-835f-1a0fc30f0c40" (UID: "60c177d1-3e68-4afb-835f-1a0fc30f0c40"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:56:46 crc kubenswrapper[4833]: I0219 13:56:46.561676 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60c177d1-3e68-4afb-835f-1a0fc30f0c40-kube-api-access-dn7pr" (OuterVolumeSpecName: "kube-api-access-dn7pr") pod "60c177d1-3e68-4afb-835f-1a0fc30f0c40" (UID: "60c177d1-3e68-4afb-835f-1a0fc30f0c40"). InnerVolumeSpecName "kube-api-access-dn7pr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:56:46 crc kubenswrapper[4833]: I0219 13:56:46.604331 4833 generic.go:334] "Generic (PLEG): container finished" podID="60c177d1-3e68-4afb-835f-1a0fc30f0c40" containerID="ed5b5eacb5c35c8ddbf365ebef552136204892678eb6080a5326b16a6dd5a267" exitCode=0 Feb 19 13:56:46 crc kubenswrapper[4833]: I0219 13:56:46.605779 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ppk68" event={"ID":"60c177d1-3e68-4afb-835f-1a0fc30f0c40","Type":"ContainerDied","Data":"ed5b5eacb5c35c8ddbf365ebef552136204892678eb6080a5326b16a6dd5a267"} Feb 19 13:56:46 crc kubenswrapper[4833]: I0219 13:56:46.605957 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ppk68" event={"ID":"60c177d1-3e68-4afb-835f-1a0fc30f0c40","Type":"ContainerDied","Data":"8d8f74b9edfff29e29f74b82107018c1acf7bff3a397124b3ed2d21c03b9640a"} Feb 19 13:56:46 crc kubenswrapper[4833]: I0219 13:56:46.606104 4833 scope.go:117] "RemoveContainer" containerID="ed5b5eacb5c35c8ddbf365ebef552136204892678eb6080a5326b16a6dd5a267" Feb 19 13:56:46 crc kubenswrapper[4833]: I0219 13:56:46.606402 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ppk68" Feb 19 13:56:46 crc kubenswrapper[4833]: I0219 13:56:46.629631 4833 scope.go:117] "RemoveContainer" containerID="64eb7c5623c08a12ecbac9d9f67226a067bd4c7e3676ccc6a0e1a3bf50dd96f6" Feb 19 13:56:46 crc kubenswrapper[4833]: I0219 13:56:46.649644 4833 scope.go:117] "RemoveContainer" containerID="735d263da72e5d5161933227bf1b046dc764c38738d43f5ae28498ebeff294fa" Feb 19 13:56:46 crc kubenswrapper[4833]: I0219 13:56:46.658897 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dn7pr\" (UniqueName: \"kubernetes.io/projected/60c177d1-3e68-4afb-835f-1a0fc30f0c40-kube-api-access-dn7pr\") on node \"crc\" DevicePath \"\"" Feb 19 13:56:46 crc kubenswrapper[4833]: I0219 13:56:46.659118 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60c177d1-3e68-4afb-835f-1a0fc30f0c40-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:56:46 crc kubenswrapper[4833]: I0219 13:56:46.682953 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60c177d1-3e68-4afb-835f-1a0fc30f0c40-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "60c177d1-3e68-4afb-835f-1a0fc30f0c40" (UID: "60c177d1-3e68-4afb-835f-1a0fc30f0c40"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:56:46 crc kubenswrapper[4833]: I0219 13:56:46.699715 4833 scope.go:117] "RemoveContainer" containerID="ed5b5eacb5c35c8ddbf365ebef552136204892678eb6080a5326b16a6dd5a267" Feb 19 13:56:46 crc kubenswrapper[4833]: E0219 13:56:46.700214 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed5b5eacb5c35c8ddbf365ebef552136204892678eb6080a5326b16a6dd5a267\": container with ID starting with ed5b5eacb5c35c8ddbf365ebef552136204892678eb6080a5326b16a6dd5a267 not found: ID does not exist" containerID="ed5b5eacb5c35c8ddbf365ebef552136204892678eb6080a5326b16a6dd5a267" Feb 19 13:56:46 crc kubenswrapper[4833]: I0219 13:56:46.700278 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed5b5eacb5c35c8ddbf365ebef552136204892678eb6080a5326b16a6dd5a267"} err="failed to get container status \"ed5b5eacb5c35c8ddbf365ebef552136204892678eb6080a5326b16a6dd5a267\": rpc error: code = NotFound desc = could not find container \"ed5b5eacb5c35c8ddbf365ebef552136204892678eb6080a5326b16a6dd5a267\": container with ID starting with ed5b5eacb5c35c8ddbf365ebef552136204892678eb6080a5326b16a6dd5a267 not found: ID does not exist" Feb 19 13:56:46 crc kubenswrapper[4833]: I0219 13:56:46.700315 4833 scope.go:117] "RemoveContainer" containerID="64eb7c5623c08a12ecbac9d9f67226a067bd4c7e3676ccc6a0e1a3bf50dd96f6" Feb 19 13:56:46 crc kubenswrapper[4833]: E0219 13:56:46.700724 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64eb7c5623c08a12ecbac9d9f67226a067bd4c7e3676ccc6a0e1a3bf50dd96f6\": container with ID starting with 64eb7c5623c08a12ecbac9d9f67226a067bd4c7e3676ccc6a0e1a3bf50dd96f6 not found: ID does not exist" containerID="64eb7c5623c08a12ecbac9d9f67226a067bd4c7e3676ccc6a0e1a3bf50dd96f6" Feb 19 13:56:46 crc kubenswrapper[4833]: I0219 13:56:46.700750 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64eb7c5623c08a12ecbac9d9f67226a067bd4c7e3676ccc6a0e1a3bf50dd96f6"} err="failed to get container status \"64eb7c5623c08a12ecbac9d9f67226a067bd4c7e3676ccc6a0e1a3bf50dd96f6\": rpc error: code = NotFound desc = could not find container \"64eb7c5623c08a12ecbac9d9f67226a067bd4c7e3676ccc6a0e1a3bf50dd96f6\": container with ID starting with 64eb7c5623c08a12ecbac9d9f67226a067bd4c7e3676ccc6a0e1a3bf50dd96f6 not found: ID does not exist" Feb 19 13:56:46 crc kubenswrapper[4833]: I0219 13:56:46.700766 4833 scope.go:117] "RemoveContainer" containerID="735d263da72e5d5161933227bf1b046dc764c38738d43f5ae28498ebeff294fa" Feb 19 13:56:46 crc kubenswrapper[4833]: E0219 13:56:46.700983 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"735d263da72e5d5161933227bf1b046dc764c38738d43f5ae28498ebeff294fa\": container with ID starting with 735d263da72e5d5161933227bf1b046dc764c38738d43f5ae28498ebeff294fa not found: ID does not exist" containerID="735d263da72e5d5161933227bf1b046dc764c38738d43f5ae28498ebeff294fa" Feb 19 13:56:46 crc kubenswrapper[4833]: I0219 13:56:46.701013 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"735d263da72e5d5161933227bf1b046dc764c38738d43f5ae28498ebeff294fa"} err="failed to get container status \"735d263da72e5d5161933227bf1b046dc764c38738d43f5ae28498ebeff294fa\": rpc error: code = NotFound desc = could not 
find container \"735d263da72e5d5161933227bf1b046dc764c38738d43f5ae28498ebeff294fa\": container with ID starting with 735d263da72e5d5161933227bf1b046dc764c38738d43f5ae28498ebeff294fa not found: ID does not exist" Feb 19 13:56:46 crc kubenswrapper[4833]: I0219 13:56:46.761334 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60c177d1-3e68-4afb-835f-1a0fc30f0c40-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:56:46 crc kubenswrapper[4833]: I0219 13:56:46.961969 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ppk68"] Feb 19 13:56:46 crc kubenswrapper[4833]: I0219 13:56:46.977656 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ppk68"] Feb 19 13:56:48 crc kubenswrapper[4833]: I0219 13:56:48.334960 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60c177d1-3e68-4afb-835f-1a0fc30f0c40" path="/var/lib/kubelet/pods/60c177d1-3e68-4afb-835f-1a0fc30f0c40/volumes" Feb 19 13:56:53 crc kubenswrapper[4833]: I0219 13:56:53.494104 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b5h8t"] Feb 19 13:56:53 crc kubenswrapper[4833]: E0219 13:56:53.495524 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60c177d1-3e68-4afb-835f-1a0fc30f0c40" containerName="extract-utilities" Feb 19 13:56:53 crc kubenswrapper[4833]: I0219 13:56:53.495557 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="60c177d1-3e68-4afb-835f-1a0fc30f0c40" containerName="extract-utilities" Feb 19 13:56:53 crc kubenswrapper[4833]: E0219 13:56:53.495588 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60c177d1-3e68-4afb-835f-1a0fc30f0c40" containerName="extract-content" Feb 19 13:56:53 crc kubenswrapper[4833]: I0219 13:56:53.495602 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="60c177d1-3e68-4afb-835f-1a0fc30f0c40" containerName="extract-content" Feb 19 13:56:53 crc kubenswrapper[4833]: E0219 13:56:53.495634 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60c177d1-3e68-4afb-835f-1a0fc30f0c40" containerName="registry-server" Feb 19 13:56:53 crc kubenswrapper[4833]: I0219 13:56:53.495647 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="60c177d1-3e68-4afb-835f-1a0fc30f0c40" containerName="registry-server" Feb 19 13:56:53 crc kubenswrapper[4833]: I0219 13:56:53.496015 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="60c177d1-3e68-4afb-835f-1a0fc30f0c40" containerName="registry-server" Feb 19 13:56:53 crc kubenswrapper[4833]: I0219 13:56:53.501627 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b5h8t" Feb 19 13:56:53 crc kubenswrapper[4833]: I0219 13:56:53.527438 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b5h8t"] Feb 19 13:56:53 crc kubenswrapper[4833]: I0219 13:56:53.616061 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm6bn\" (UniqueName: \"kubernetes.io/projected/1ebaf71b-7f29-461f-b646-7ef093ea6665-kube-api-access-tm6bn\") pod \"certified-operators-b5h8t\" (UID: \"1ebaf71b-7f29-461f-b646-7ef093ea6665\") " pod="openshift-marketplace/certified-operators-b5h8t" Feb 19 13:56:53 crc kubenswrapper[4833]: I0219 13:56:53.616446 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ebaf71b-7f29-461f-b646-7ef093ea6665-utilities\") pod \"certified-operators-b5h8t\" (UID: \"1ebaf71b-7f29-461f-b646-7ef093ea6665\") " pod="openshift-marketplace/certified-operators-b5h8t" Feb 19 13:56:53 crc kubenswrapper[4833]: I0219 13:56:53.616608 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ebaf71b-7f29-461f-b646-7ef093ea6665-catalog-content\") pod \"certified-operators-b5h8t\" (UID: \"1ebaf71b-7f29-461f-b646-7ef093ea6665\") " pod="openshift-marketplace/certified-operators-b5h8t" Feb 19 13:56:53 crc kubenswrapper[4833]: I0219 13:56:53.719021 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ebaf71b-7f29-461f-b646-7ef093ea6665-utilities\") pod \"certified-operators-b5h8t\" (UID: \"1ebaf71b-7f29-461f-b646-7ef093ea6665\") " pod="openshift-marketplace/certified-operators-b5h8t" Feb 19 13:56:53 crc kubenswrapper[4833]: I0219 13:56:53.719143 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ebaf71b-7f29-461f-b646-7ef093ea6665-catalog-content\") pod \"certified-operators-b5h8t\" (UID: \"1ebaf71b-7f29-461f-b646-7ef093ea6665\") " pod="openshift-marketplace/certified-operators-b5h8t" Feb 19 13:56:53 crc kubenswrapper[4833]: I0219 13:56:53.719360 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm6bn\" (UniqueName: \"kubernetes.io/projected/1ebaf71b-7f29-461f-b646-7ef093ea6665-kube-api-access-tm6bn\") pod \"certified-operators-b5h8t\" (UID: \"1ebaf71b-7f29-461f-b646-7ef093ea6665\") " pod="openshift-marketplace/certified-operators-b5h8t" Feb 19 13:56:53 crc kubenswrapper[4833]: I0219 13:56:53.722788 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ebaf71b-7f29-461f-b646-7ef093ea6665-utilities\") pod \"certified-operators-b5h8t\" (UID: \"1ebaf71b-7f29-461f-b646-7ef093ea6665\") " pod="openshift-marketplace/certified-operators-b5h8t" Feb 19 13:56:53 crc kubenswrapper[4833]: I0219 13:56:53.723240 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ebaf71b-7f29-461f-b646-7ef093ea6665-catalog-content\") pod \"certified-operators-b5h8t\" (UID: \"1ebaf71b-7f29-461f-b646-7ef093ea6665\") " pod="openshift-marketplace/certified-operators-b5h8t" Feb 19 13:56:53 crc kubenswrapper[4833]: I0219 13:56:53.771430 4833 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-tm6bn\" (UniqueName: \"kubernetes.io/projected/1ebaf71b-7f29-461f-b646-7ef093ea6665-kube-api-access-tm6bn\") pod \"certified-operators-b5h8t\" (UID: \"1ebaf71b-7f29-461f-b646-7ef093ea6665\") " pod="openshift-marketplace/certified-operators-b5h8t" Feb 19 13:56:53 crc kubenswrapper[4833]: I0219 13:56:53.847163 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b5h8t" Feb 19 13:56:54 crc kubenswrapper[4833]: I0219 13:56:54.378386 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b5h8t"] Feb 19 13:56:54 crc kubenswrapper[4833]: I0219 13:56:54.698560 4833 generic.go:334] "Generic (PLEG): container finished" podID="1ebaf71b-7f29-461f-b646-7ef093ea6665" containerID="69f2e3b6967948666812e8ad5dd4ac80bb655ca354f81cd38bf9933c389251d1" exitCode=0 Feb 19 13:56:54 crc kubenswrapper[4833]: I0219 13:56:54.698757 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5h8t" event={"ID":"1ebaf71b-7f29-461f-b646-7ef093ea6665","Type":"ContainerDied","Data":"69f2e3b6967948666812e8ad5dd4ac80bb655ca354f81cd38bf9933c389251d1"} Feb 19 13:56:54 crc kubenswrapper[4833]: I0219 13:56:54.699044 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5h8t" event={"ID":"1ebaf71b-7f29-461f-b646-7ef093ea6665","Type":"ContainerStarted","Data":"2aa9030b1c96d6f055979913367b9b1f363bf342210b3350a98fdefac70a4557"} Feb 19 13:56:56 crc kubenswrapper[4833]: I0219 13:56:56.720608 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5h8t" event={"ID":"1ebaf71b-7f29-461f-b646-7ef093ea6665","Type":"ContainerStarted","Data":"31fb932d0b4c2d270f03b0c86fdc7b33c99fe848075803d2ec2f2ad46718afec"} Feb 19 13:56:57 crc kubenswrapper[4833]: I0219 13:56:57.731276 4833 generic.go:334] "Generic (PLEG): container finished" podID="1ebaf71b-7f29-461f-b646-7ef093ea6665" containerID="31fb932d0b4c2d270f03b0c86fdc7b33c99fe848075803d2ec2f2ad46718afec" exitCode=0 Feb 19 13:56:57 crc kubenswrapper[4833]: I0219 13:56:57.731316 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5h8t" event={"ID":"1ebaf71b-7f29-461f-b646-7ef093ea6665","Type":"ContainerDied","Data":"31fb932d0b4c2d270f03b0c86fdc7b33c99fe848075803d2ec2f2ad46718afec"} Feb 19 13:56:58 crc kubenswrapper[4833]: I0219 13:56:58.747319 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5h8t" event={"ID":"1ebaf71b-7f29-461f-b646-7ef093ea6665","Type":"ContainerStarted","Data":"5f797bf8fa5021b79677262adaeb6927882f207005ac57e0442bc01c4e91ed3e"} Feb 19 13:56:58 crc kubenswrapper[4833]: I0219 13:56:58.777004 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b5h8t" podStartSLOduration=2.311102331 podStartE2EDuration="5.776976557s" podCreationTimestamp="2026-02-19 13:56:53 +0000 UTC" firstStartedPulling="2026-02-19 13:56:54.700804029 +0000 UTC m=+4225.096322837" lastFinishedPulling="2026-02-19 13:56:58.166678255 +0000 UTC m=+4228.562197063" observedRunningTime="2026-02-19 13:56:58.76878474 +0000 UTC m=+4229.164303518" watchObservedRunningTime="2026-02-19 13:56:58.776976557 +0000 UTC m=+4229.172495365" Feb 19 13:57:03 crc kubenswrapper[4833]: I0219 13:57:03.848352 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-b5h8t" Feb 19 13:57:03 crc kubenswrapper[4833]: I0219 13:57:03.850078 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b5h8t" Feb 19 13:57:03 crc kubenswrapper[4833]: I0219 13:57:03.935168 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b5h8t" Feb 19 13:57:04 crc kubenswrapper[4833]: I0219 13:57:04.930231 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b5h8t" Feb 19 13:57:05 crc kubenswrapper[4833]: I0219 13:57:05.022781 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b5h8t"] Feb 19 13:57:06 crc kubenswrapper[4833]: I0219 13:57:06.866780 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b5h8t" podUID="1ebaf71b-7f29-461f-b646-7ef093ea6665" containerName="registry-server" containerID="cri-o://5f797bf8fa5021b79677262adaeb6927882f207005ac57e0442bc01c4e91ed3e" gracePeriod=2 Feb 19 13:57:07 crc kubenswrapper[4833]: I0219 13:57:07.838627 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b5h8t" Feb 19 13:57:07 crc kubenswrapper[4833]: I0219 13:57:07.877939 4833 generic.go:334] "Generic (PLEG): container finished" podID="1ebaf71b-7f29-461f-b646-7ef093ea6665" containerID="5f797bf8fa5021b79677262adaeb6927882f207005ac57e0442bc01c4e91ed3e" exitCode=0 Feb 19 13:57:07 crc kubenswrapper[4833]: I0219 13:57:07.877982 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5h8t" event={"ID":"1ebaf71b-7f29-461f-b646-7ef093ea6665","Type":"ContainerDied","Data":"5f797bf8fa5021b79677262adaeb6927882f207005ac57e0442bc01c4e91ed3e"} Feb 19 13:57:07 crc kubenswrapper[4833]: I0219 13:57:07.878007 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5h8t" event={"ID":"1ebaf71b-7f29-461f-b646-7ef093ea6665","Type":"ContainerDied","Data":"2aa9030b1c96d6f055979913367b9b1f363bf342210b3350a98fdefac70a4557"} Feb 19 13:57:07 crc kubenswrapper[4833]: I0219 13:57:07.878013 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b5h8t" Feb 19 13:57:07 crc kubenswrapper[4833]: I0219 13:57:07.878022 4833 scope.go:117] "RemoveContainer" containerID="5f797bf8fa5021b79677262adaeb6927882f207005ac57e0442bc01c4e91ed3e" Feb 19 13:57:07 crc kubenswrapper[4833]: I0219 13:57:07.897088 4833 scope.go:117] "RemoveContainer" containerID="31fb932d0b4c2d270f03b0c86fdc7b33c99fe848075803d2ec2f2ad46718afec" Feb 19 13:57:07 crc kubenswrapper[4833]: I0219 13:57:07.927479 4833 scope.go:117] "RemoveContainer" containerID="69f2e3b6967948666812e8ad5dd4ac80bb655ca354f81cd38bf9933c389251d1" Feb 19 13:57:07 crc kubenswrapper[4833]: I0219 13:57:07.973666 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tm6bn\" (UniqueName: \"kubernetes.io/projected/1ebaf71b-7f29-461f-b646-7ef093ea6665-kube-api-access-tm6bn\") pod \"1ebaf71b-7f29-461f-b646-7ef093ea6665\" (UID: \"1ebaf71b-7f29-461f-b646-7ef093ea6665\") " Feb 19 13:57:07 crc kubenswrapper[4833]: I0219 13:57:07.973807 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ebaf71b-7f29-461f-b646-7ef093ea6665-utilities\") pod \"1ebaf71b-7f29-461f-b646-7ef093ea6665\" (UID: \"1ebaf71b-7f29-461f-b646-7ef093ea6665\") " Feb 19 13:57:07 crc kubenswrapper[4833]: I0219 13:57:07.973997 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ebaf71b-7f29-461f-b646-7ef093ea6665-catalog-content\") pod \"1ebaf71b-7f29-461f-b646-7ef093ea6665\" (UID: \"1ebaf71b-7f29-461f-b646-7ef093ea6665\") " Feb 19 13:57:07 crc kubenswrapper[4833]: I0219 13:57:07.975986 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ebaf71b-7f29-461f-b646-7ef093ea6665-utilities" (OuterVolumeSpecName: "utilities") pod "1ebaf71b-7f29-461f-b646-7ef093ea6665" (UID: "1ebaf71b-7f29-461f-b646-7ef093ea6665"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:57:07 crc kubenswrapper[4833]: I0219 13:57:07.982653 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ebaf71b-7f29-461f-b646-7ef093ea6665-kube-api-access-tm6bn" (OuterVolumeSpecName: "kube-api-access-tm6bn") pod "1ebaf71b-7f29-461f-b646-7ef093ea6665" (UID: "1ebaf71b-7f29-461f-b646-7ef093ea6665"). InnerVolumeSpecName "kube-api-access-tm6bn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:57:07 crc kubenswrapper[4833]: I0219 13:57:07.991461 4833 scope.go:117] "RemoveContainer" containerID="5f797bf8fa5021b79677262adaeb6927882f207005ac57e0442bc01c4e91ed3e" Feb 19 13:57:07 crc kubenswrapper[4833]: E0219 13:57:07.992151 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f797bf8fa5021b79677262adaeb6927882f207005ac57e0442bc01c4e91ed3e\": container with ID starting with 5f797bf8fa5021b79677262adaeb6927882f207005ac57e0442bc01c4e91ed3e not found: ID does not exist" containerID="5f797bf8fa5021b79677262adaeb6927882f207005ac57e0442bc01c4e91ed3e" Feb 19 13:57:07 crc kubenswrapper[4833]: I0219 13:57:07.992219 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f797bf8fa5021b79677262adaeb6927882f207005ac57e0442bc01c4e91ed3e"} err="failed to get container status \"5f797bf8fa5021b79677262adaeb6927882f207005ac57e0442bc01c4e91ed3e\": rpc error: code = NotFound desc = could not find container \"5f797bf8fa5021b79677262adaeb6927882f207005ac57e0442bc01c4e91ed3e\": container with ID starting with 5f797bf8fa5021b79677262adaeb6927882f207005ac57e0442bc01c4e91ed3e not found: ID does not exist" Feb 19 13:57:07 crc kubenswrapper[4833]: I0219 13:57:07.992273 4833 scope.go:117] "RemoveContainer" containerID="31fb932d0b4c2d270f03b0c86fdc7b33c99fe848075803d2ec2f2ad46718afec" Feb 19 13:57:07 crc kubenswrapper[4833]: E0219 13:57:07.992990 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31fb932d0b4c2d270f03b0c86fdc7b33c99fe848075803d2ec2f2ad46718afec\": container with ID starting with 31fb932d0b4c2d270f03b0c86fdc7b33c99fe848075803d2ec2f2ad46718afec not found: ID does not exist" containerID="31fb932d0b4c2d270f03b0c86fdc7b33c99fe848075803d2ec2f2ad46718afec" Feb 19 13:57:07 crc kubenswrapper[4833]: I0219 13:57:07.993050 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31fb932d0b4c2d270f03b0c86fdc7b33c99fe848075803d2ec2f2ad46718afec"} err="failed to get container status \"31fb932d0b4c2d270f03b0c86fdc7b33c99fe848075803d2ec2f2ad46718afec\": rpc error: code = NotFound desc = could not find container \"31fb932d0b4c2d270f03b0c86fdc7b33c99fe848075803d2ec2f2ad46718afec\": container with ID starting with 31fb932d0b4c2d270f03b0c86fdc7b33c99fe848075803d2ec2f2ad46718afec not found: ID does not exist" Feb 19 13:57:07 crc kubenswrapper[4833]: I0219 13:57:07.993092 4833 scope.go:117] "RemoveContainer" containerID="69f2e3b6967948666812e8ad5dd4ac80bb655ca354f81cd38bf9933c389251d1" Feb 19 13:57:07 crc kubenswrapper[4833]: E0219 13:57:07.993619 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69f2e3b6967948666812e8ad5dd4ac80bb655ca354f81cd38bf9933c389251d1\": container with ID starting with 69f2e3b6967948666812e8ad5dd4ac80bb655ca354f81cd38bf9933c389251d1 not found: ID does not exist" containerID="69f2e3b6967948666812e8ad5dd4ac80bb655ca354f81cd38bf9933c389251d1" Feb 19 13:57:07 crc kubenswrapper[4833]: I0219 13:57:07.993666 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69f2e3b6967948666812e8ad5dd4ac80bb655ca354f81cd38bf9933c389251d1"} err="failed to get container status \"69f2e3b6967948666812e8ad5dd4ac80bb655ca354f81cd38bf9933c389251d1\": rpc error: code = NotFound desc = could not 
find container \"69f2e3b6967948666812e8ad5dd4ac80bb655ca354f81cd38bf9933c389251d1\": container with ID starting with 69f2e3b6967948666812e8ad5dd4ac80bb655ca354f81cd38bf9933c389251d1 not found: ID does not exist" Feb 19 13:57:08 crc kubenswrapper[4833]: I0219 13:57:08.020972 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ebaf71b-7f29-461f-b646-7ef093ea6665-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ebaf71b-7f29-461f-b646-7ef093ea6665" (UID: "1ebaf71b-7f29-461f-b646-7ef093ea6665"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:57:08 crc kubenswrapper[4833]: I0219 13:57:08.076786 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ebaf71b-7f29-461f-b646-7ef093ea6665-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:57:08 crc kubenswrapper[4833]: I0219 13:57:08.076827 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tm6bn\" (UniqueName: \"kubernetes.io/projected/1ebaf71b-7f29-461f-b646-7ef093ea6665-kube-api-access-tm6bn\") on node \"crc\" DevicePath \"\"" Feb 19 13:57:08 crc kubenswrapper[4833]: I0219 13:57:08.076841 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ebaf71b-7f29-461f-b646-7ef093ea6665-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:57:08 crc kubenswrapper[4833]: I0219 13:57:08.249766 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b5h8t"] Feb 19 13:57:08 crc kubenswrapper[4833]: I0219 13:57:08.263077 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b5h8t"] Feb 19 13:57:08 crc kubenswrapper[4833]: I0219 13:57:08.354203 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ebaf71b-7f29-461f-b646-7ef093ea6665" path="/var/lib/kubelet/pods/1ebaf71b-7f29-461f-b646-7ef093ea6665/volumes" Feb 19 13:57:09 crc kubenswrapper[4833]: I0219 13:57:09.903091 4833 generic.go:334] "Generic (PLEG): container finished" podID="f637b54a-2e35-4f05-a5cf-204c9cc9154a" containerID="c840db8b2ca2a95f08c224d21fc13f3ea0609d40cd201f9e4ae93ed06245961f" exitCode=0 Feb 19 13:57:09 crc kubenswrapper[4833]: I0219 13:57:09.903210 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8wn8p/must-gather-q2nz7" event={"ID":"f637b54a-2e35-4f05-a5cf-204c9cc9154a","Type":"ContainerDied","Data":"c840db8b2ca2a95f08c224d21fc13f3ea0609d40cd201f9e4ae93ed06245961f"} Feb 19 13:57:09 crc kubenswrapper[4833]: I0219 13:57:09.904020 4833 scope.go:117] "RemoveContainer" containerID="c840db8b2ca2a95f08c224d21fc13f3ea0609d40cd201f9e4ae93ed06245961f" Feb 19 13:57:10 crc kubenswrapper[4833]: I0219 13:57:10.024344 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8wn8p_must-gather-q2nz7_f637b54a-2e35-4f05-a5cf-204c9cc9154a/gather/0.log" Feb 19 13:57:15 crc kubenswrapper[4833]: I0219 13:57:15.745021 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:57:15 crc kubenswrapper[4833]: I0219 13:57:15.745872 4833 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:57:20 crc kubenswrapper[4833]: I0219 13:57:20.334770 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8wn8p/must-gather-q2nz7"] Feb 19 13:57:20 crc kubenswrapper[4833]: I0219 13:57:20.335370 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8wn8p/must-gather-q2nz7"] Feb 19 13:57:20 crc kubenswrapper[4833]: I0219 13:57:20.335673 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-8wn8p/must-gather-q2nz7" podUID="f637b54a-2e35-4f05-a5cf-204c9cc9154a" containerName="copy" containerID="cri-o://9655f200b35fcc47002a3ba8346715d0cae96a739f0b96938dafcf2f7ae7b459" gracePeriod=2 Feb 19 13:57:21 crc kubenswrapper[4833]: I0219 13:57:21.413147 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8wn8p_must-gather-q2nz7_f637b54a-2e35-4f05-a5cf-204c9cc9154a/copy/0.log" Feb 19 13:57:21 crc kubenswrapper[4833]: I0219 13:57:21.414092 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8wn8p/must-gather-q2nz7" Feb 19 13:57:21 crc kubenswrapper[4833]: I0219 13:57:21.480735 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr5sl\" (UniqueName: \"kubernetes.io/projected/f637b54a-2e35-4f05-a5cf-204c9cc9154a-kube-api-access-fr5sl\") pod \"f637b54a-2e35-4f05-a5cf-204c9cc9154a\" (UID: \"f637b54a-2e35-4f05-a5cf-204c9cc9154a\") " Feb 19 13:57:21 crc kubenswrapper[4833]: I0219 13:57:21.480805 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f637b54a-2e35-4f05-a5cf-204c9cc9154a-must-gather-output\") pod \"f637b54a-2e35-4f05-a5cf-204c9cc9154a\" (UID: \"f637b54a-2e35-4f05-a5cf-204c9cc9154a\") " Feb 19 13:57:21 crc kubenswrapper[4833]: I0219 13:57:21.489634 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f637b54a-2e35-4f05-a5cf-204c9cc9154a-kube-api-access-fr5sl" (OuterVolumeSpecName: "kube-api-access-fr5sl") pod "f637b54a-2e35-4f05-a5cf-204c9cc9154a" (UID: "f637b54a-2e35-4f05-a5cf-204c9cc9154a"). InnerVolumeSpecName "kube-api-access-fr5sl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:57:21 crc kubenswrapper[4833]: I0219 13:57:21.582848 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr5sl\" (UniqueName: \"kubernetes.io/projected/f637b54a-2e35-4f05-a5cf-204c9cc9154a-kube-api-access-fr5sl\") on node \"crc\" DevicePath \"\"" Feb 19 13:57:21 crc kubenswrapper[4833]: I0219 13:57:21.635262 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f637b54a-2e35-4f05-a5cf-204c9cc9154a-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f637b54a-2e35-4f05-a5cf-204c9cc9154a" (UID: "f637b54a-2e35-4f05-a5cf-204c9cc9154a"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:57:21 crc kubenswrapper[4833]: I0219 13:57:21.684448 4833 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f637b54a-2e35-4f05-a5cf-204c9cc9154a-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 19 13:57:22 crc kubenswrapper[4833]: I0219 13:57:22.026065 4833 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8wn8p_must-gather-q2nz7_f637b54a-2e35-4f05-a5cf-204c9cc9154a/copy/0.log" Feb 19 13:57:22 crc kubenswrapper[4833]: I0219 13:57:22.027491 4833 generic.go:334] "Generic (PLEG): container finished" podID="f637b54a-2e35-4f05-a5cf-204c9cc9154a" containerID="9655f200b35fcc47002a3ba8346715d0cae96a739f0b96938dafcf2f7ae7b459" exitCode=143 Feb 19 13:57:22 crc kubenswrapper[4833]: I0219 13:57:22.027608 4833 scope.go:117] "RemoveContainer" containerID="9655f200b35fcc47002a3ba8346715d0cae96a739f0b96938dafcf2f7ae7b459" Feb 19 13:57:22 crc kubenswrapper[4833]: I0219 13:57:22.027720 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8wn8p/must-gather-q2nz7" Feb 19 13:57:22 crc kubenswrapper[4833]: I0219 13:57:22.068404 4833 scope.go:117] "RemoveContainer" containerID="c840db8b2ca2a95f08c224d21fc13f3ea0609d40cd201f9e4ae93ed06245961f" Feb 19 13:57:22 crc kubenswrapper[4833]: I0219 13:57:22.135598 4833 scope.go:117] "RemoveContainer" containerID="9655f200b35fcc47002a3ba8346715d0cae96a739f0b96938dafcf2f7ae7b459" Feb 19 13:57:22 crc kubenswrapper[4833]: E0219 13:57:22.136177 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9655f200b35fcc47002a3ba8346715d0cae96a739f0b96938dafcf2f7ae7b459\": container with ID starting with 9655f200b35fcc47002a3ba8346715d0cae96a739f0b96938dafcf2f7ae7b459 not found: ID does not exist" containerID="9655f200b35fcc47002a3ba8346715d0cae96a739f0b96938dafcf2f7ae7b459" Feb 19 13:57:22 crc kubenswrapper[4833]: I0219 13:57:22.136391 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9655f200b35fcc47002a3ba8346715d0cae96a739f0b96938dafcf2f7ae7b459"} err="failed to get container status \"9655f200b35fcc47002a3ba8346715d0cae96a739f0b96938dafcf2f7ae7b459\": rpc error: code = NotFound desc = could not find container \"9655f200b35fcc47002a3ba8346715d0cae96a739f0b96938dafcf2f7ae7b459\": container with ID starting with 9655f200b35fcc47002a3ba8346715d0cae96a739f0b96938dafcf2f7ae7b459 not found: ID does not exist" Feb 19 13:57:22 crc kubenswrapper[4833]: I0219 13:57:22.136638 4833 scope.go:117] "RemoveContainer" containerID="c840db8b2ca2a95f08c224d21fc13f3ea0609d40cd201f9e4ae93ed06245961f" Feb 19 13:57:22 crc kubenswrapper[4833]: E0219 13:57:22.138276 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c840db8b2ca2a95f08c224d21fc13f3ea0609d40cd201f9e4ae93ed06245961f\": container with ID starting with c840db8b2ca2a95f08c224d21fc13f3ea0609d40cd201f9e4ae93ed06245961f not found: ID does not exist" containerID="c840db8b2ca2a95f08c224d21fc13f3ea0609d40cd201f9e4ae93ed06245961f" Feb 19 13:57:22 crc kubenswrapper[4833]: I0219 13:57:22.138549 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c840db8b2ca2a95f08c224d21fc13f3ea0609d40cd201f9e4ae93ed06245961f"} err="failed to get container status 
\"c840db8b2ca2a95f08c224d21fc13f3ea0609d40cd201f9e4ae93ed06245961f\": rpc error: code = NotFound desc = could not find container \"c840db8b2ca2a95f08c224d21fc13f3ea0609d40cd201f9e4ae93ed06245961f\": container with ID starting with c840db8b2ca2a95f08c224d21fc13f3ea0609d40cd201f9e4ae93ed06245961f not found: ID does not exist" Feb 19 13:57:22 crc kubenswrapper[4833]: I0219 13:57:22.326590 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f637b54a-2e35-4f05-a5cf-204c9cc9154a" path="/var/lib/kubelet/pods/f637b54a-2e35-4f05-a5cf-204c9cc9154a/volumes" Feb 19 13:57:45 crc kubenswrapper[4833]: I0219 13:57:45.744705 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:57:45 crc kubenswrapper[4833]: I0219 13:57:45.745316 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:58:15 crc kubenswrapper[4833]: I0219 13:58:15.744777 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 13:58:15 crc kubenswrapper[4833]: I0219 13:58:15.745780 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 13:58:15 crc kubenswrapper[4833]: I0219 13:58:15.745848 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" Feb 19 13:58:15 crc kubenswrapper[4833]: I0219 13:58:15.746660 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"591c0b25dd56124bace57f3410df3ed7c130b33cb8b1ea78fd95240b008644a4"} pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 13:58:15 crc kubenswrapper[4833]: I0219 13:58:15.746759 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" containerID="cri-o://591c0b25dd56124bace57f3410df3ed7c130b33cb8b1ea78fd95240b008644a4" gracePeriod=600 Feb 19 13:58:16 crc kubenswrapper[4833]: I0219 13:58:16.616868 4833 generic.go:334] "Generic (PLEG): container finished" podID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerID="591c0b25dd56124bace57f3410df3ed7c130b33cb8b1ea78fd95240b008644a4" exitCode=0 Feb 19 13:58:16 crc kubenswrapper[4833]: I0219 13:58:16.617537 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" 
event={"ID":"a396d626-cea2-42cf-84c5-943b0b85a92b","Type":"ContainerDied","Data":"591c0b25dd56124bace57f3410df3ed7c130b33cb8b1ea78fd95240b008644a4"} Feb 19 13:58:16 crc kubenswrapper[4833]: I0219 13:58:16.617593 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" event={"ID":"a396d626-cea2-42cf-84c5-943b0b85a92b","Type":"ContainerStarted","Data":"6c504a76ab972f68304dd5b4c14ea92d43763cacdcefa21ea2554701acbc166b"} Feb 19 13:58:16 crc kubenswrapper[4833]: I0219 13:58:16.617613 4833 scope.go:117] "RemoveContainer" containerID="c8997d03737db42a2d58f2936e1f212ab53c9697cb8c50a8c4a60174788b9509" Feb 19 13:58:30 crc kubenswrapper[4833]: I0219 13:58:30.908624 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8ldvn"] Feb 19 13:58:30 crc kubenswrapper[4833]: E0219 13:58:30.909994 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ebaf71b-7f29-461f-b646-7ef093ea6665" containerName="extract-utilities" Feb 19 13:58:30 crc kubenswrapper[4833]: I0219 13:58:30.910019 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ebaf71b-7f29-461f-b646-7ef093ea6665" containerName="extract-utilities" Feb 19 13:58:30 crc kubenswrapper[4833]: E0219 13:58:30.910054 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f637b54a-2e35-4f05-a5cf-204c9cc9154a" containerName="gather" Feb 19 13:58:30 crc kubenswrapper[4833]: I0219 13:58:30.910066 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f637b54a-2e35-4f05-a5cf-204c9cc9154a" containerName="gather" Feb 19 13:58:30 crc kubenswrapper[4833]: E0219 13:58:30.910089 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ebaf71b-7f29-461f-b646-7ef093ea6665" containerName="registry-server" Feb 19 13:58:30 crc kubenswrapper[4833]: I0219 13:58:30.910106 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ebaf71b-7f29-461f-b646-7ef093ea6665" containerName="registry-server" Feb 19 13:58:30 crc kubenswrapper[4833]: E0219 13:58:30.910169 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ebaf71b-7f29-461f-b646-7ef093ea6665" containerName="extract-content" Feb 19 13:58:30 crc kubenswrapper[4833]: I0219 13:58:30.910184 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ebaf71b-7f29-461f-b646-7ef093ea6665" containerName="extract-content" Feb 19 13:58:30 crc kubenswrapper[4833]: E0219 13:58:30.910202 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f637b54a-2e35-4f05-a5cf-204c9cc9154a" containerName="copy" Feb 19 13:58:30 crc kubenswrapper[4833]: I0219 13:58:30.910217 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="f637b54a-2e35-4f05-a5cf-204c9cc9154a" containerName="copy" Feb 19 13:58:30 crc kubenswrapper[4833]: I0219 13:58:30.910650 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f637b54a-2e35-4f05-a5cf-204c9cc9154a" containerName="gather" Feb 19 13:58:30 crc kubenswrapper[4833]: I0219 13:58:30.910669 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ebaf71b-7f29-461f-b646-7ef093ea6665" containerName="registry-server" Feb 19 13:58:30 crc kubenswrapper[4833]: I0219 13:58:30.910701 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="f637b54a-2e35-4f05-a5cf-204c9cc9154a" containerName="copy" Feb 19 13:58:30 crc kubenswrapper[4833]: I0219 13:58:30.913056 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8ldvn" Feb 19 13:58:30 crc kubenswrapper[4833]: I0219 13:58:30.926708 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8ldvn"] Feb 19 13:58:31 crc kubenswrapper[4833]: I0219 13:58:31.033736 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c592ff7-ee56-4868-bb71-d406b18eba7c-utilities\") pod \"community-operators-8ldvn\" (UID: \"5c592ff7-ee56-4868-bb71-d406b18eba7c\") " pod="openshift-marketplace/community-operators-8ldvn" Feb 19 13:58:31 crc kubenswrapper[4833]: I0219 13:58:31.033821 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qwvp\" (UniqueName: \"kubernetes.io/projected/5c592ff7-ee56-4868-bb71-d406b18eba7c-kube-api-access-4qwvp\") pod \"community-operators-8ldvn\" (UID: \"5c592ff7-ee56-4868-bb71-d406b18eba7c\") " pod="openshift-marketplace/community-operators-8ldvn" Feb 19 13:58:31 crc kubenswrapper[4833]: I0219 13:58:31.034015 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c592ff7-ee56-4868-bb71-d406b18eba7c-catalog-content\") pod \"community-operators-8ldvn\" (UID: \"5c592ff7-ee56-4868-bb71-d406b18eba7c\") " pod="openshift-marketplace/community-operators-8ldvn" Feb 19 13:58:31 crc kubenswrapper[4833]: I0219 13:58:31.136182 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c592ff7-ee56-4868-bb71-d406b18eba7c-utilities\") pod \"community-operators-8ldvn\" (UID: \"5c592ff7-ee56-4868-bb71-d406b18eba7c\") " pod="openshift-marketplace/community-operators-8ldvn" Feb 19 13:58:31 crc kubenswrapper[4833]: I0219 13:58:31.136265 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qwvp\" (UniqueName: \"kubernetes.io/projected/5c592ff7-ee56-4868-bb71-d406b18eba7c-kube-api-access-4qwvp\") pod \"community-operators-8ldvn\" (UID: \"5c592ff7-ee56-4868-bb71-d406b18eba7c\") " pod="openshift-marketplace/community-operators-8ldvn" Feb 19 13:58:31 crc kubenswrapper[4833]: I0219 13:58:31.136300 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c592ff7-ee56-4868-bb71-d406b18eba7c-catalog-content\") pod \"community-operators-8ldvn\" (UID: \"5c592ff7-ee56-4868-bb71-d406b18eba7c\") " pod="openshift-marketplace/community-operators-8ldvn" Feb 19 13:58:31 crc kubenswrapper[4833]: I0219 13:58:31.137079 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c592ff7-ee56-4868-bb71-d406b18eba7c-catalog-content\") pod \"community-operators-8ldvn\" (UID: \"5c592ff7-ee56-4868-bb71-d406b18eba7c\") " pod="openshift-marketplace/community-operators-8ldvn" Feb 19 13:58:31 crc kubenswrapper[4833]: I0219 13:58:31.137209 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c592ff7-ee56-4868-bb71-d406b18eba7c-utilities\") pod \"community-operators-8ldvn\" (UID: \"5c592ff7-ee56-4868-bb71-d406b18eba7c\") " pod="openshift-marketplace/community-operators-8ldvn" Feb 19 13:58:31 crc kubenswrapper[4833]: I0219 13:58:31.157706 4833 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4qwvp\" (UniqueName: \"kubernetes.io/projected/5c592ff7-ee56-4868-bb71-d406b18eba7c-kube-api-access-4qwvp\") pod \"community-operators-8ldvn\" (UID: \"5c592ff7-ee56-4868-bb71-d406b18eba7c\") " pod="openshift-marketplace/community-operators-8ldvn" Feb 19 13:58:31 crc kubenswrapper[4833]: I0219 13:58:31.239630 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8ldvn" Feb 19 13:58:31 crc kubenswrapper[4833]: W0219 13:58:31.853781 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c592ff7_ee56_4868_bb71_d406b18eba7c.slice/crio-e95ea9916f3e8f216a227950a946f95001959cddc318168c2fadcb4b2aa4436d WatchSource:0}: Error finding container e95ea9916f3e8f216a227950a946f95001959cddc318168c2fadcb4b2aa4436d: Status 404 returned error can't find the container with id e95ea9916f3e8f216a227950a946f95001959cddc318168c2fadcb4b2aa4436d Feb 19 13:58:31 crc kubenswrapper[4833]: I0219 13:58:31.854261 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8ldvn"] Feb 19 13:58:32 crc kubenswrapper[4833]: I0219 13:58:32.699691 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nzn2w"] Feb 19 13:58:32 crc kubenswrapper[4833]: I0219 13:58:32.704246 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nzn2w" Feb 19 13:58:32 crc kubenswrapper[4833]: I0219 13:58:32.716334 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nzn2w"] Feb 19 13:58:32 crc kubenswrapper[4833]: I0219 13:58:32.785708 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc-utilities\") pod \"redhat-marketplace-nzn2w\" (UID: \"5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc\") " pod="openshift-marketplace/redhat-marketplace-nzn2w" Feb 19 13:58:32 crc kubenswrapper[4833]: I0219 13:58:32.786485 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc-catalog-content\") pod \"redhat-marketplace-nzn2w\" (UID: \"5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc\") " pod="openshift-marketplace/redhat-marketplace-nzn2w" Feb 19 13:58:32 crc kubenswrapper[4833]: I0219 13:58:32.786793 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whwm8\" (UniqueName: \"kubernetes.io/projected/5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc-kube-api-access-whwm8\") pod \"redhat-marketplace-nzn2w\" (UID: \"5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc\") " pod="openshift-marketplace/redhat-marketplace-nzn2w" Feb 19 13:58:32 crc kubenswrapper[4833]: I0219 13:58:32.828901 4833 generic.go:334] "Generic (PLEG): container finished" podID="5c592ff7-ee56-4868-bb71-d406b18eba7c" containerID="396774baef68f419a82ae91cab4a21c2d5f991b89b7ae27f079857f8cd527e72" exitCode=0 Feb 19 13:58:32 crc kubenswrapper[4833]: I0219 13:58:32.828947 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ldvn" 
event={"ID":"5c592ff7-ee56-4868-bb71-d406b18eba7c","Type":"ContainerDied","Data":"396774baef68f419a82ae91cab4a21c2d5f991b89b7ae27f079857f8cd527e72"} Feb 19 13:58:32 crc kubenswrapper[4833]: I0219 13:58:32.829010 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ldvn" event={"ID":"5c592ff7-ee56-4868-bb71-d406b18eba7c","Type":"ContainerStarted","Data":"e95ea9916f3e8f216a227950a946f95001959cddc318168c2fadcb4b2aa4436d"} Feb 19 13:58:32 crc kubenswrapper[4833]: I0219 13:58:32.889616 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whwm8\" (UniqueName: \"kubernetes.io/projected/5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc-kube-api-access-whwm8\") pod \"redhat-marketplace-nzn2w\" (UID: \"5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc\") " pod="openshift-marketplace/redhat-marketplace-nzn2w" Feb 19 13:58:32 crc kubenswrapper[4833]: I0219 13:58:32.889996 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc-utilities\") pod \"redhat-marketplace-nzn2w\" (UID: \"5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc\") " pod="openshift-marketplace/redhat-marketplace-nzn2w" Feb 19 13:58:32 crc kubenswrapper[4833]: I0219 13:58:32.890136 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc-catalog-content\") pod \"redhat-marketplace-nzn2w\" (UID: \"5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc\") " pod="openshift-marketplace/redhat-marketplace-nzn2w" Feb 19 13:58:32 crc kubenswrapper[4833]: I0219 13:58:32.891643 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc-utilities\") pod \"redhat-marketplace-nzn2w\" (UID: \"5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc\") " pod="openshift-marketplace/redhat-marketplace-nzn2w" Feb 19 13:58:32 crc kubenswrapper[4833]: I0219 13:58:32.891813 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc-catalog-content\") pod \"redhat-marketplace-nzn2w\" (UID: \"5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc\") " pod="openshift-marketplace/redhat-marketplace-nzn2w" Feb 19 13:58:32 crc kubenswrapper[4833]: I0219 13:58:32.914077 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whwm8\" (UniqueName: \"kubernetes.io/projected/5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc-kube-api-access-whwm8\") pod \"redhat-marketplace-nzn2w\" (UID: \"5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc\") " pod="openshift-marketplace/redhat-marketplace-nzn2w" Feb 19 13:58:33 crc kubenswrapper[4833]: I0219 13:58:33.032150 4833 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nzn2w" Feb 19 13:58:33 crc kubenswrapper[4833]: I0219 13:58:33.498507 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nzn2w"] Feb 19 13:58:33 crc kubenswrapper[4833]: W0219 13:58:33.510244 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ae65f4d_7a2a_4bdc_b5f9_993b944fdbfc.slice/crio-2156939fcc90e2dc81e8052c27cda95360b6a2e5fe96588d214327de1e8d8118 WatchSource:0}: Error finding container 2156939fcc90e2dc81e8052c27cda95360b6a2e5fe96588d214327de1e8d8118: Status 404 returned error can't find the container with id 2156939fcc90e2dc81e8052c27cda95360b6a2e5fe96588d214327de1e8d8118 Feb 19 13:58:33 crc kubenswrapper[4833]: I0219 13:58:33.838817 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nzn2w" event={"ID":"5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc","Type":"ContainerStarted","Data":"2156939fcc90e2dc81e8052c27cda95360b6a2e5fe96588d214327de1e8d8118"} Feb 19 13:58:34 crc kubenswrapper[4833]: I0219 13:58:34.851850 4833 generic.go:334] "Generic (PLEG): container finished" podID="5c592ff7-ee56-4868-bb71-d406b18eba7c" containerID="660a55ddbf1d8ac9571623252220c2cdccff95ec93894a4425aab2b27eb46557" exitCode=0 Feb 19 13:58:34 crc kubenswrapper[4833]: I0219 13:58:34.851936 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ldvn" event={"ID":"5c592ff7-ee56-4868-bb71-d406b18eba7c","Type":"ContainerDied","Data":"660a55ddbf1d8ac9571623252220c2cdccff95ec93894a4425aab2b27eb46557"} Feb 19 13:58:34 crc kubenswrapper[4833]: I0219 13:58:34.858672 4833 generic.go:334] "Generic (PLEG): container finished" podID="5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc" containerID="b540866fc32cc22085258c1cbcd22bcacd5b15bdf5b6e254e154d759e4cf3467" exitCode=0 Feb 19 13:58:34 crc kubenswrapper[4833]: I0219 13:58:34.858723 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nzn2w" event={"ID":"5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc","Type":"ContainerDied","Data":"b540866fc32cc22085258c1cbcd22bcacd5b15bdf5b6e254e154d759e4cf3467"} Feb 19 13:58:35 crc kubenswrapper[4833]: I0219 13:58:35.874014 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ldvn" event={"ID":"5c592ff7-ee56-4868-bb71-d406b18eba7c","Type":"ContainerStarted","Data":"bf44af1bac6ac040e3ddb12d97e6035c00d4121d276e6003f1a54fe66bb4b099"} Feb 19 13:58:35 crc kubenswrapper[4833]: I0219 13:58:35.898784 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8ldvn" podStartSLOduration=3.481568495 podStartE2EDuration="5.898760476s" podCreationTimestamp="2026-02-19 13:58:30 +0000 UTC" firstStartedPulling="2026-02-19 13:58:32.831113993 +0000 UTC m=+4323.226632761" lastFinishedPulling="2026-02-19 13:58:35.248305934 +0000 UTC m=+4325.643824742" observedRunningTime="2026-02-19 13:58:35.893111147 +0000 UTC m=+4326.288629935" watchObservedRunningTime="2026-02-19 13:58:35.898760476 +0000 UTC m=+4326.294279244" Feb 19 13:58:36 crc kubenswrapper[4833]: I0219 13:58:36.890692 4833 generic.go:334] "Generic (PLEG): container finished" podID="5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc" containerID="47d549aab3b85c8d67f42b6882c2f4cb018f29fdf7839999f1d81ed85c0d8bfb" exitCode=0 Feb 19 13:58:36 crc kubenswrapper[4833]: I0219 13:58:36.890794 4833 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nzn2w" event={"ID":"5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc","Type":"ContainerDied","Data":"47d549aab3b85c8d67f42b6882c2f4cb018f29fdf7839999f1d81ed85c0d8bfb"} Feb 19 13:58:37 crc kubenswrapper[4833]: I0219 13:58:37.902645 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nzn2w" event={"ID":"5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc","Type":"ContainerStarted","Data":"3804135512d49405fac3a9db0db927b8adb6d3c33943ed791ba2db7333e22d20"} Feb 19 13:58:37 crc kubenswrapper[4833]: I0219 13:58:37.925605 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nzn2w" podStartSLOduration=3.472356068 podStartE2EDuration="5.925586102s" podCreationTimestamp="2026-02-19 13:58:32 +0000 UTC" firstStartedPulling="2026-02-19 13:58:34.860296991 +0000 UTC m=+4325.255815769" lastFinishedPulling="2026-02-19 13:58:37.313527025 +0000 UTC m=+4327.709045803" observedRunningTime="2026-02-19 13:58:37.921488683 +0000 UTC m=+4328.317007461" watchObservedRunningTime="2026-02-19 13:58:37.925586102 +0000 UTC m=+4328.321104870" Feb 19 13:58:41 crc kubenswrapper[4833]: I0219 13:58:41.239934 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8ldvn" Feb 19 13:58:41 crc kubenswrapper[4833]: I0219 13:58:41.240836 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8ldvn" Feb 19 13:58:41 crc kubenswrapper[4833]: I0219 13:58:41.327276 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8ldvn" Feb 19 13:58:42 crc kubenswrapper[4833]: I0219 13:58:42.020637 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8ldvn" Feb 19 13:58:42 crc kubenswrapper[4833]: I0219 13:58:42.093220 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8ldvn"] Feb 19 13:58:43 crc kubenswrapper[4833]: I0219 13:58:43.032948 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nzn2w" Feb 19 13:58:43 crc kubenswrapper[4833]: I0219 13:58:43.034690 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nzn2w" Feb 19 13:58:43 crc kubenswrapper[4833]: I0219 13:58:43.082346 4833 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nzn2w" Feb 19 13:58:43 crc kubenswrapper[4833]: I0219 13:58:43.969617 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8ldvn" podUID="5c592ff7-ee56-4868-bb71-d406b18eba7c" containerName="registry-server" containerID="cri-o://bf44af1bac6ac040e3ddb12d97e6035c00d4121d276e6003f1a54fe66bb4b099" gracePeriod=2 Feb 19 13:58:44 crc kubenswrapper[4833]: I0219 13:58:44.031776 4833 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nzn2w" Feb 19 13:58:44 crc kubenswrapper[4833]: I0219 13:58:44.275937 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nzn2w"] Feb 19 13:58:45 crc kubenswrapper[4833]: I0219 13:58:44.523669 4833 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-8ldvn" Feb 19 13:58:45 crc kubenswrapper[4833]: I0219 13:58:44.652740 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c592ff7-ee56-4868-bb71-d406b18eba7c-catalog-content\") pod \"5c592ff7-ee56-4868-bb71-d406b18eba7c\" (UID: \"5c592ff7-ee56-4868-bb71-d406b18eba7c\") " Feb 19 13:58:45 crc kubenswrapper[4833]: I0219 13:58:44.652792 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qwvp\" (UniqueName: \"kubernetes.io/projected/5c592ff7-ee56-4868-bb71-d406b18eba7c-kube-api-access-4qwvp\") pod \"5c592ff7-ee56-4868-bb71-d406b18eba7c\" (UID: \"5c592ff7-ee56-4868-bb71-d406b18eba7c\") " Feb 19 13:58:45 crc kubenswrapper[4833]: I0219 13:58:44.652872 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c592ff7-ee56-4868-bb71-d406b18eba7c-utilities\") pod \"5c592ff7-ee56-4868-bb71-d406b18eba7c\" (UID: \"5c592ff7-ee56-4868-bb71-d406b18eba7c\") " Feb 19 13:58:45 crc kubenswrapper[4833]: I0219 13:58:44.655165 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c592ff7-ee56-4868-bb71-d406b18eba7c-utilities" (OuterVolumeSpecName: "utilities") pod "5c592ff7-ee56-4868-bb71-d406b18eba7c" (UID: "5c592ff7-ee56-4868-bb71-d406b18eba7c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:58:45 crc kubenswrapper[4833]: I0219 13:58:44.662360 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c592ff7-ee56-4868-bb71-d406b18eba7c-kube-api-access-4qwvp" (OuterVolumeSpecName: "kube-api-access-4qwvp") pod "5c592ff7-ee56-4868-bb71-d406b18eba7c" (UID: "5c592ff7-ee56-4868-bb71-d406b18eba7c"). InnerVolumeSpecName "kube-api-access-4qwvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:58:45 crc kubenswrapper[4833]: I0219 13:58:44.755028 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qwvp\" (UniqueName: \"kubernetes.io/projected/5c592ff7-ee56-4868-bb71-d406b18eba7c-kube-api-access-4qwvp\") on node \"crc\" DevicePath \"\"" Feb 19 13:58:45 crc kubenswrapper[4833]: I0219 13:58:44.755060 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c592ff7-ee56-4868-bb71-d406b18eba7c-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:58:45 crc kubenswrapper[4833]: I0219 13:58:44.822184 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c592ff7-ee56-4868-bb71-d406b18eba7c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c592ff7-ee56-4868-bb71-d406b18eba7c" (UID: "5c592ff7-ee56-4868-bb71-d406b18eba7c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:58:45 crc kubenswrapper[4833]: I0219 13:58:44.856429 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c592ff7-ee56-4868-bb71-d406b18eba7c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:58:45 crc kubenswrapper[4833]: I0219 13:58:44.978379 4833 generic.go:334] "Generic (PLEG): container finished" podID="5c592ff7-ee56-4868-bb71-d406b18eba7c" containerID="bf44af1bac6ac040e3ddb12d97e6035c00d4121d276e6003f1a54fe66bb4b099" exitCode=0 Feb 19 13:58:45 crc kubenswrapper[4833]: I0219 13:58:44.978441 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ldvn" event={"ID":"5c592ff7-ee56-4868-bb71-d406b18eba7c","Type":"ContainerDied","Data":"bf44af1bac6ac040e3ddb12d97e6035c00d4121d276e6003f1a54fe66bb4b099"} Feb 19 13:58:45 crc kubenswrapper[4833]: I0219 13:58:44.978482 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ldvn" event={"ID":"5c592ff7-ee56-4868-bb71-d406b18eba7c","Type":"ContainerDied","Data":"e95ea9916f3e8f216a227950a946f95001959cddc318168c2fadcb4b2aa4436d"} Feb 19 13:58:45 crc kubenswrapper[4833]: I0219 13:58:44.978511 4833 scope.go:117] "RemoveContainer" containerID="bf44af1bac6ac040e3ddb12d97e6035c00d4121d276e6003f1a54fe66bb4b099" Feb 19 13:58:45 crc kubenswrapper[4833]: I0219 13:58:44.978512 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8ldvn" Feb 19 13:58:45 crc kubenswrapper[4833]: I0219 13:58:45.011034 4833 scope.go:117] "RemoveContainer" containerID="660a55ddbf1d8ac9571623252220c2cdccff95ec93894a4425aab2b27eb46557" Feb 19 13:58:45 crc kubenswrapper[4833]: I0219 13:58:45.031085 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8ldvn"] Feb 19 13:58:45 crc kubenswrapper[4833]: I0219 13:58:45.046854 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8ldvn"] Feb 19 13:58:45 crc kubenswrapper[4833]: I0219 13:58:45.055557 4833 scope.go:117] "RemoveContainer" containerID="396774baef68f419a82ae91cab4a21c2d5f991b89b7ae27f079857f8cd527e72" Feb 19 13:58:45 crc kubenswrapper[4833]: I0219 13:58:45.089940 4833 scope.go:117] "RemoveContainer" containerID="bf44af1bac6ac040e3ddb12d97e6035c00d4121d276e6003f1a54fe66bb4b099" Feb 19 13:58:45 crc kubenswrapper[4833]: E0219 13:58:45.097155 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf44af1bac6ac040e3ddb12d97e6035c00d4121d276e6003f1a54fe66bb4b099\": container with ID starting with bf44af1bac6ac040e3ddb12d97e6035c00d4121d276e6003f1a54fe66bb4b099 not found: ID does not exist" containerID="bf44af1bac6ac040e3ddb12d97e6035c00d4121d276e6003f1a54fe66bb4b099" Feb 19 13:58:45 crc kubenswrapper[4833]: I0219 13:58:45.097202 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf44af1bac6ac040e3ddb12d97e6035c00d4121d276e6003f1a54fe66bb4b099"} err="failed to get container status \"bf44af1bac6ac040e3ddb12d97e6035c00d4121d276e6003f1a54fe66bb4b099\": rpc error: code = NotFound desc = could not find container \"bf44af1bac6ac040e3ddb12d97e6035c00d4121d276e6003f1a54fe66bb4b099\": container with ID starting with bf44af1bac6ac040e3ddb12d97e6035c00d4121d276e6003f1a54fe66bb4b099 not found: ID does not exist" Feb 19 
13:58:45 crc kubenswrapper[4833]: I0219 13:58:45.097230 4833 scope.go:117] "RemoveContainer" containerID="660a55ddbf1d8ac9571623252220c2cdccff95ec93894a4425aab2b27eb46557" Feb 19 13:58:45 crc kubenswrapper[4833]: E0219 13:58:45.097635 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"660a55ddbf1d8ac9571623252220c2cdccff95ec93894a4425aab2b27eb46557\": container with ID starting with 660a55ddbf1d8ac9571623252220c2cdccff95ec93894a4425aab2b27eb46557 not found: ID does not exist" containerID="660a55ddbf1d8ac9571623252220c2cdccff95ec93894a4425aab2b27eb46557" Feb 19 13:58:45 crc kubenswrapper[4833]: I0219 13:58:45.097701 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"660a55ddbf1d8ac9571623252220c2cdccff95ec93894a4425aab2b27eb46557"} err="failed to get container status \"660a55ddbf1d8ac9571623252220c2cdccff95ec93894a4425aab2b27eb46557\": rpc error: code = NotFound desc = could not find container \"660a55ddbf1d8ac9571623252220c2cdccff95ec93894a4425aab2b27eb46557\": container with ID starting with 660a55ddbf1d8ac9571623252220c2cdccff95ec93894a4425aab2b27eb46557 not found: ID does not exist" Feb 19 13:58:45 crc kubenswrapper[4833]: I0219 13:58:45.097751 4833 scope.go:117] "RemoveContainer" containerID="396774baef68f419a82ae91cab4a21c2d5f991b89b7ae27f079857f8cd527e72" Feb 19 13:58:45 crc kubenswrapper[4833]: E0219 13:58:45.098285 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"396774baef68f419a82ae91cab4a21c2d5f991b89b7ae27f079857f8cd527e72\": container with ID starting with 396774baef68f419a82ae91cab4a21c2d5f991b89b7ae27f079857f8cd527e72 not found: ID does not exist" containerID="396774baef68f419a82ae91cab4a21c2d5f991b89b7ae27f079857f8cd527e72" Feb 19 13:58:45 crc kubenswrapper[4833]: I0219 13:58:45.098322 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"396774baef68f419a82ae91cab4a21c2d5f991b89b7ae27f079857f8cd527e72"} err="failed to get container status \"396774baef68f419a82ae91cab4a21c2d5f991b89b7ae27f079857f8cd527e72\": rpc error: code = NotFound desc = could not find container \"396774baef68f419a82ae91cab4a21c2d5f991b89b7ae27f079857f8cd527e72\": container with ID starting with 396774baef68f419a82ae91cab4a21c2d5f991b89b7ae27f079857f8cd527e72 not found: ID does not exist" Feb 19 13:58:45 crc kubenswrapper[4833]: I0219 13:58:45.990966 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nzn2w" podUID="5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc" containerName="registry-server" containerID="cri-o://3804135512d49405fac3a9db0db927b8adb6d3c33943ed791ba2db7333e22d20" gracePeriod=2 Feb 19 13:58:46 crc kubenswrapper[4833]: I0219 13:58:46.327040 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c592ff7-ee56-4868-bb71-d406b18eba7c" path="/var/lib/kubelet/pods/5c592ff7-ee56-4868-bb71-d406b18eba7c/volumes" Feb 19 13:58:46 crc kubenswrapper[4833]: I0219 13:58:46.477039 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nzn2w" Feb 19 13:58:46 crc kubenswrapper[4833]: I0219 13:58:46.591227 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc-utilities\") pod \"5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc\" (UID: \"5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc\") " Feb 19 13:58:46 crc kubenswrapper[4833]: I0219 13:58:46.591298 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whwm8\" (UniqueName: \"kubernetes.io/projected/5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc-kube-api-access-whwm8\") pod \"5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc\" (UID: \"5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc\") " Feb 19 13:58:46 crc kubenswrapper[4833]: I0219 13:58:46.591397 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc-catalog-content\") pod \"5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc\" (UID: \"5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc\") " Feb 19 13:58:46 crc kubenswrapper[4833]: I0219 13:58:46.592895 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc-utilities" (OuterVolumeSpecName: "utilities") pod "5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc" (UID: "5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:58:46 crc kubenswrapper[4833]: I0219 13:58:46.602276 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc-kube-api-access-whwm8" (OuterVolumeSpecName: "kube-api-access-whwm8") pod "5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc" (UID: "5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc"). InnerVolumeSpecName "kube-api-access-whwm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 13:58:46 crc kubenswrapper[4833]: I0219 13:58:46.631653 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc" (UID: "5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 19 13:58:46 crc kubenswrapper[4833]: I0219 13:58:46.692998 4833 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 13:58:46 crc kubenswrapper[4833]: I0219 13:58:46.693037 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whwm8\" (UniqueName: \"kubernetes.io/projected/5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc-kube-api-access-whwm8\") on node \"crc\" DevicePath \"\"" Feb 19 13:58:46 crc kubenswrapper[4833]: I0219 13:58:46.693049 4833 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 13:58:47 crc kubenswrapper[4833]: I0219 13:58:47.005790 4833 generic.go:334] "Generic (PLEG): container finished" podID="5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc" containerID="3804135512d49405fac3a9db0db927b8adb6d3c33943ed791ba2db7333e22d20" exitCode=0 Feb 19 13:58:47 crc kubenswrapper[4833]: I0219 13:58:47.005867 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nzn2w" event={"ID":"5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc","Type":"ContainerDied","Data":"3804135512d49405fac3a9db0db927b8adb6d3c33943ed791ba2db7333e22d20"} Feb 19 13:58:47 crc kubenswrapper[4833]: I0219 13:58:47.005930 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nzn2w" event={"ID":"5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc","Type":"ContainerDied","Data":"2156939fcc90e2dc81e8052c27cda95360b6a2e5fe96588d214327de1e8d8118"} Feb 19 13:58:47 crc kubenswrapper[4833]: I0219 13:58:47.005959 4833 scope.go:117] "RemoveContainer" containerID="3804135512d49405fac3a9db0db927b8adb6d3c33943ed791ba2db7333e22d20" Feb 19 13:58:47 crc kubenswrapper[4833]: I0219 13:58:47.006712 4833 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nzn2w" Feb 19 13:58:47 crc kubenswrapper[4833]: I0219 13:58:47.043178 4833 scope.go:117] "RemoveContainer" containerID="47d549aab3b85c8d67f42b6882c2f4cb018f29fdf7839999f1d81ed85c0d8bfb" Feb 19 13:58:47 crc kubenswrapper[4833]: I0219 13:58:47.044775 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nzn2w"] Feb 19 13:58:47 crc kubenswrapper[4833]: I0219 13:58:47.053878 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nzn2w"] Feb 19 13:58:47 crc kubenswrapper[4833]: I0219 13:58:47.074534 4833 scope.go:117] "RemoveContainer" containerID="b540866fc32cc22085258c1cbcd22bcacd5b15bdf5b6e254e154d759e4cf3467" Feb 19 13:58:47 crc kubenswrapper[4833]: I0219 13:58:47.128585 4833 scope.go:117] "RemoveContainer" containerID="3804135512d49405fac3a9db0db927b8adb6d3c33943ed791ba2db7333e22d20" Feb 19 13:58:47 crc kubenswrapper[4833]: E0219 13:58:47.129170 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3804135512d49405fac3a9db0db927b8adb6d3c33943ed791ba2db7333e22d20\": container with ID starting with 3804135512d49405fac3a9db0db927b8adb6d3c33943ed791ba2db7333e22d20 not found: ID does not exist" containerID="3804135512d49405fac3a9db0db927b8adb6d3c33943ed791ba2db7333e22d20" Feb 19 13:58:47 crc kubenswrapper[4833]: I0219 13:58:47.129207 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3804135512d49405fac3a9db0db927b8adb6d3c33943ed791ba2db7333e22d20"} err="failed to get container status \"3804135512d49405fac3a9db0db927b8adb6d3c33943ed791ba2db7333e22d20\": rpc error: code = NotFound desc = could not find container \"3804135512d49405fac3a9db0db927b8adb6d3c33943ed791ba2db7333e22d20\": container with ID starting with 3804135512d49405fac3a9db0db927b8adb6d3c33943ed791ba2db7333e22d20 not found: ID does not exist" Feb 19 13:58:47 crc kubenswrapper[4833]: I0219 13:58:47.129231 4833 scope.go:117] "RemoveContainer" containerID="47d549aab3b85c8d67f42b6882c2f4cb018f29fdf7839999f1d81ed85c0d8bfb" Feb 19 13:58:47 crc kubenswrapper[4833]: E0219 13:58:47.129843 4833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47d549aab3b85c8d67f42b6882c2f4cb018f29fdf7839999f1d81ed85c0d8bfb\": container with ID starting with 47d549aab3b85c8d67f42b6882c2f4cb018f29fdf7839999f1d81ed85c0d8bfb not found: ID does not exist" containerID="47d549aab3b85c8d67f42b6882c2f4cb018f29fdf7839999f1d81ed85c0d8bfb" Feb 19 13:58:47 crc kubenswrapper[4833]: I0219 13:58:47.129915 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47d549aab3b85c8d67f42b6882c2f4cb018f29fdf7839999f1d81ed85c0d8bfb"} err="failed to get container status \"47d549aab3b85c8d67f42b6882c2f4cb018f29fdf7839999f1d81ed85c0d8bfb\": rpc error: code = NotFound desc = could not find container \"47d549aab3b85c8d67f42b6882c2f4cb018f29fdf7839999f1d81ed85c0d8bfb\": container with ID starting with 47d549aab3b85c8d67f42b6882c2f4cb018f29fdf7839999f1d81ed85c0d8bfb not found: ID does not exist" Feb 19 13:58:47 crc kubenswrapper[4833]: I0219 13:58:47.129956 4833 scope.go:117] "RemoveContainer" containerID="b540866fc32cc22085258c1cbcd22bcacd5b15bdf5b6e254e154d759e4cf3467" Feb 19 13:58:47 crc kubenswrapper[4833]: E0219 13:58:47.130309 4833 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b540866fc32cc22085258c1cbcd22bcacd5b15bdf5b6e254e154d759e4cf3467\": container with ID starting with b540866fc32cc22085258c1cbcd22bcacd5b15bdf5b6e254e154d759e4cf3467 not found: ID does not exist" containerID="b540866fc32cc22085258c1cbcd22bcacd5b15bdf5b6e254e154d759e4cf3467" Feb 19 13:58:47 crc kubenswrapper[4833]: I0219 13:58:47.130354 4833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b540866fc32cc22085258c1cbcd22bcacd5b15bdf5b6e254e154d759e4cf3467"} err="failed to get container status \"b540866fc32cc22085258c1cbcd22bcacd5b15bdf5b6e254e154d759e4cf3467\": rpc error: code = NotFound desc = could not find container \"b540866fc32cc22085258c1cbcd22bcacd5b15bdf5b6e254e154d759e4cf3467\": container with ID starting with b540866fc32cc22085258c1cbcd22bcacd5b15bdf5b6e254e154d759e4cf3467 not found: ID does not exist" Feb 19 13:58:48 crc kubenswrapper[4833]: I0219 13:58:48.337314 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc" path="/var/lib/kubelet/pods/5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc/volumes" Feb 19 14:00:00 crc kubenswrapper[4833]: I0219 14:00:00.194685 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525160-hwzqf"] Feb 19 14:00:00 crc kubenswrapper[4833]: E0219 14:00:00.195748 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c592ff7-ee56-4868-bb71-d406b18eba7c" containerName="extract-utilities" Feb 19 14:00:00 crc kubenswrapper[4833]: I0219 14:00:00.195769 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c592ff7-ee56-4868-bb71-d406b18eba7c" containerName="extract-utilities" Feb 19 14:00:00 crc kubenswrapper[4833]: E0219 14:00:00.195797 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc" containerName="extract-utilities" Feb 19 14:00:00 crc kubenswrapper[4833]: I0219 14:00:00.195806 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc" containerName="extract-utilities" Feb 19 14:00:00 crc kubenswrapper[4833]: E0219 14:00:00.195833 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc" containerName="extract-content" Feb 19 14:00:00 crc kubenswrapper[4833]: I0219 14:00:00.195842 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc" containerName="extract-content" Feb 19 14:00:00 crc kubenswrapper[4833]: E0219 14:00:00.195855 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c592ff7-ee56-4868-bb71-d406b18eba7c" containerName="registry-server" Feb 19 14:00:00 crc kubenswrapper[4833]: I0219 14:00:00.195862 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c592ff7-ee56-4868-bb71-d406b18eba7c" containerName="registry-server" Feb 19 14:00:00 crc kubenswrapper[4833]: E0219 14:00:00.195884 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c592ff7-ee56-4868-bb71-d406b18eba7c" containerName="extract-content" Feb 19 14:00:00 crc kubenswrapper[4833]: I0219 14:00:00.195893 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c592ff7-ee56-4868-bb71-d406b18eba7c" containerName="extract-content" Feb 19 14:00:00 crc kubenswrapper[4833]: E0219 14:00:00.195909 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc" 
containerName="registry-server" Feb 19 14:00:00 crc kubenswrapper[4833]: I0219 14:00:00.195917 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc" containerName="registry-server" Feb 19 14:00:00 crc kubenswrapper[4833]: I0219 14:00:00.196130 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ae65f4d-7a2a-4bdc-b5f9-993b944fdbfc" containerName="registry-server" Feb 19 14:00:00 crc kubenswrapper[4833]: I0219 14:00:00.196162 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c592ff7-ee56-4868-bb71-d406b18eba7c" containerName="registry-server" Feb 19 14:00:00 crc kubenswrapper[4833]: I0219 14:00:00.196878 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525160-hwzqf" Feb 19 14:00:00 crc kubenswrapper[4833]: I0219 14:00:00.202732 4833 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 19 14:00:00 crc kubenswrapper[4833]: I0219 14:00:00.204189 4833 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 19 14:00:00 crc kubenswrapper[4833]: I0219 14:00:00.205009 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525160-hwzqf"] Feb 19 14:00:00 crc kubenswrapper[4833]: I0219 14:00:00.308092 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/def85b1b-30c5-4d74-a7b0-e2be473946cb-config-volume\") pod \"collect-profiles-29525160-hwzqf\" (UID: \"def85b1b-30c5-4d74-a7b0-e2be473946cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525160-hwzqf" Feb 19 14:00:00 crc kubenswrapper[4833]: I0219 14:00:00.308183 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9k9r\" (UniqueName: \"kubernetes.io/projected/def85b1b-30c5-4d74-a7b0-e2be473946cb-kube-api-access-w9k9r\") pod \"collect-profiles-29525160-hwzqf\" (UID: \"def85b1b-30c5-4d74-a7b0-e2be473946cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525160-hwzqf" Feb 19 14:00:00 crc kubenswrapper[4833]: I0219 14:00:00.308226 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/def85b1b-30c5-4d74-a7b0-e2be473946cb-secret-volume\") pod \"collect-profiles-29525160-hwzqf\" (UID: \"def85b1b-30c5-4d74-a7b0-e2be473946cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525160-hwzqf" Feb 19 14:00:00 crc kubenswrapper[4833]: I0219 14:00:00.412332 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/def85b1b-30c5-4d74-a7b0-e2be473946cb-config-volume\") pod \"collect-profiles-29525160-hwzqf\" (UID: \"def85b1b-30c5-4d74-a7b0-e2be473946cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525160-hwzqf" Feb 19 14:00:00 crc kubenswrapper[4833]: I0219 14:00:00.413606 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/def85b1b-30c5-4d74-a7b0-e2be473946cb-config-volume\") pod \"collect-profiles-29525160-hwzqf\" (UID: \"def85b1b-30c5-4d74-a7b0-e2be473946cb\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29525160-hwzqf" Feb 19 14:00:00 crc kubenswrapper[4833]: I0219 14:00:00.414074 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9k9r\" (UniqueName: \"kubernetes.io/projected/def85b1b-30c5-4d74-a7b0-e2be473946cb-kube-api-access-w9k9r\") pod \"collect-profiles-29525160-hwzqf\" (UID: \"def85b1b-30c5-4d74-a7b0-e2be473946cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525160-hwzqf" Feb 19 14:00:00 crc kubenswrapper[4833]: I0219 14:00:00.414452 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/def85b1b-30c5-4d74-a7b0-e2be473946cb-secret-volume\") pod \"collect-profiles-29525160-hwzqf\" (UID: \"def85b1b-30c5-4d74-a7b0-e2be473946cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525160-hwzqf" Feb 19 14:00:00 crc kubenswrapper[4833]: I0219 14:00:00.423436 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/def85b1b-30c5-4d74-a7b0-e2be473946cb-secret-volume\") pod \"collect-profiles-29525160-hwzqf\" (UID: \"def85b1b-30c5-4d74-a7b0-e2be473946cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525160-hwzqf" Feb 19 14:00:00 crc kubenswrapper[4833]: I0219 14:00:00.432111 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9k9r\" (UniqueName: \"kubernetes.io/projected/def85b1b-30c5-4d74-a7b0-e2be473946cb-kube-api-access-w9k9r\") pod \"collect-profiles-29525160-hwzqf\" (UID: \"def85b1b-30c5-4d74-a7b0-e2be473946cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29525160-hwzqf" Feb 19 14:00:00 crc kubenswrapper[4833]: I0219 14:00:00.525708 4833 util.go:30] "No sandbox for pod can be found. 
Feb 19 14:00:01 crc kubenswrapper[4833]: I0219 14:00:01.057179 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525160-hwzqf"]
Feb 19 14:00:01 crc kubenswrapper[4833]: W0219 14:00:01.063083 4833 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddef85b1b_30c5_4d74_a7b0_e2be473946cb.slice/crio-aac6e7c980cfa80ec2a6d7d4b87e7929c12e055750764f8bcd56cbaf5d5a779c WatchSource:0}: Error finding container aac6e7c980cfa80ec2a6d7d4b87e7929c12e055750764f8bcd56cbaf5d5a779c: Status 404 returned error can't find the container with id aac6e7c980cfa80ec2a6d7d4b87e7929c12e055750764f8bcd56cbaf5d5a779c
Feb 19 14:00:01 crc kubenswrapper[4833]: I0219 14:00:01.854132 4833 generic.go:334] "Generic (PLEG): container finished" podID="def85b1b-30c5-4d74-a7b0-e2be473946cb" containerID="61491abd303ca9e831001cf7816f38a465f3a864271e847a5bbf479cc53aeff7" exitCode=0
Feb 19 14:00:01 crc kubenswrapper[4833]: I0219 14:00:01.854252 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525160-hwzqf" event={"ID":"def85b1b-30c5-4d74-a7b0-e2be473946cb","Type":"ContainerDied","Data":"61491abd303ca9e831001cf7816f38a465f3a864271e847a5bbf479cc53aeff7"}
Feb 19 14:00:01 crc kubenswrapper[4833]: I0219 14:00:01.854564 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525160-hwzqf" event={"ID":"def85b1b-30c5-4d74-a7b0-e2be473946cb","Type":"ContainerStarted","Data":"aac6e7c980cfa80ec2a6d7d4b87e7929c12e055750764f8bcd56cbaf5d5a779c"}
Feb 19 14:00:03 crc kubenswrapper[4833]: I0219 14:00:03.231510 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525160-hwzqf"
Feb 19 14:00:03 crc kubenswrapper[4833]: I0219 14:00:03.373451 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/def85b1b-30c5-4d74-a7b0-e2be473946cb-config-volume\") pod \"def85b1b-30c5-4d74-a7b0-e2be473946cb\" (UID: \"def85b1b-30c5-4d74-a7b0-e2be473946cb\") "
Feb 19 14:00:03 crc kubenswrapper[4833]: I0219 14:00:03.373895 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9k9r\" (UniqueName: \"kubernetes.io/projected/def85b1b-30c5-4d74-a7b0-e2be473946cb-kube-api-access-w9k9r\") pod \"def85b1b-30c5-4d74-a7b0-e2be473946cb\" (UID: \"def85b1b-30c5-4d74-a7b0-e2be473946cb\") "
Feb 19 14:00:03 crc kubenswrapper[4833]: I0219 14:00:03.373955 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/def85b1b-30c5-4d74-a7b0-e2be473946cb-secret-volume\") pod \"def85b1b-30c5-4d74-a7b0-e2be473946cb\" (UID: \"def85b1b-30c5-4d74-a7b0-e2be473946cb\") "
Feb 19 14:00:03 crc kubenswrapper[4833]: I0219 14:00:03.374689 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/def85b1b-30c5-4d74-a7b0-e2be473946cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "def85b1b-30c5-4d74-a7b0-e2be473946cb" (UID: "def85b1b-30c5-4d74-a7b0-e2be473946cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 19 14:00:03 crc kubenswrapper[4833]: I0219 14:00:03.476716 4833 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/def85b1b-30c5-4d74-a7b0-e2be473946cb-config-volume\") on node \"crc\" DevicePath \"\""
Feb 19 14:00:03 crc kubenswrapper[4833]: I0219 14:00:03.650805 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/def85b1b-30c5-4d74-a7b0-e2be473946cb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "def85b1b-30c5-4d74-a7b0-e2be473946cb" (UID: "def85b1b-30c5-4d74-a7b0-e2be473946cb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 19 14:00:03 crc kubenswrapper[4833]: I0219 14:00:03.653731 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/def85b1b-30c5-4d74-a7b0-e2be473946cb-kube-api-access-w9k9r" (OuterVolumeSpecName: "kube-api-access-w9k9r") pod "def85b1b-30c5-4d74-a7b0-e2be473946cb" (UID: "def85b1b-30c5-4d74-a7b0-e2be473946cb"). InnerVolumeSpecName "kube-api-access-w9k9r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 19 14:00:03 crc kubenswrapper[4833]: I0219 14:00:03.681043 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9k9r\" (UniqueName: \"kubernetes.io/projected/def85b1b-30c5-4d74-a7b0-e2be473946cb-kube-api-access-w9k9r\") on node \"crc\" DevicePath \"\""
Feb 19 14:00:03 crc kubenswrapper[4833]: I0219 14:00:03.681093 4833 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/def85b1b-30c5-4d74-a7b0-e2be473946cb-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 19 14:00:03 crc kubenswrapper[4833]: I0219 14:00:03.871333 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29525160-hwzqf" event={"ID":"def85b1b-30c5-4d74-a7b0-e2be473946cb","Type":"ContainerDied","Data":"aac6e7c980cfa80ec2a6d7d4b87e7929c12e055750764f8bcd56cbaf5d5a779c"}
Feb 19 14:00:03 crc kubenswrapper[4833]: I0219 14:00:03.871374 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aac6e7c980cfa80ec2a6d7d4b87e7929c12e055750764f8bcd56cbaf5d5a779c"
Feb 19 14:00:03 crc kubenswrapper[4833]: I0219 14:00:03.871436 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29525160-hwzqf"
Feb 19 14:00:04 crc kubenswrapper[4833]: I0219 14:00:04.378630 4833 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525115-r54cj"]
Feb 19 14:00:04 crc kubenswrapper[4833]: I0219 14:00:04.384176 4833 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29525115-r54cj"]
Feb 19 14:00:06 crc kubenswrapper[4833]: I0219 14:00:06.333626 4833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cff0514-b00f-44f8-8193-851e8e1c2716" path="/var/lib/kubelet/pods/4cff0514-b00f-44f8-8193-851e8e1c2716/volumes"
Feb 19 14:00:08 crc kubenswrapper[4833]: I0219 14:00:08.888582 4833 scope.go:117] "RemoveContainer" containerID="914662436eca46cd19990eedca74f5061baf0a75ee2122a618abbddb6a5a2d1d"
Feb 19 14:00:45 crc kubenswrapper[4833]: I0219 14:00:45.744319 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 14:00:45 crc kubenswrapper[4833]: I0219 14:00:45.744974 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 14:01:00 crc kubenswrapper[4833]: I0219 14:01:00.179271 4833 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29525161-ktb9w"]
Feb 19 14:01:00 crc kubenswrapper[4833]: E0219 14:01:00.180662 4833 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="def85b1b-30c5-4d74-a7b0-e2be473946cb" containerName="collect-profiles"
Feb 19 14:01:00 crc kubenswrapper[4833]: I0219 14:01:00.180687 4833 state_mem.go:107] "Deleted CPUSet assignment" podUID="def85b1b-30c5-4d74-a7b0-e2be473946cb" containerName="collect-profiles"
Feb 19 14:01:00 crc kubenswrapper[4833]: I0219 14:01:00.181031 4833 memory_manager.go:354] "RemoveStaleState removing state" podUID="def85b1b-30c5-4d74-a7b0-e2be473946cb" containerName="collect-profiles"
Feb 19 14:01:00 crc kubenswrapper[4833]: I0219 14:01:00.182131 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29525161-ktb9w"
Feb 19 14:01:00 crc kubenswrapper[4833]: I0219 14:01:00.205911 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29525161-ktb9w"]
Feb 19 14:01:00 crc kubenswrapper[4833]: I0219 14:01:00.323019 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6000663c-51be-464e-a628-0a96371eac3a-combined-ca-bundle\") pod \"keystone-cron-29525161-ktb9w\" (UID: \"6000663c-51be-464e-a628-0a96371eac3a\") " pod="openstack/keystone-cron-29525161-ktb9w"
Feb 19 14:01:00 crc kubenswrapper[4833]: I0219 14:01:00.323319 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkr44\" (UniqueName: \"kubernetes.io/projected/6000663c-51be-464e-a628-0a96371eac3a-kube-api-access-mkr44\") pod \"keystone-cron-29525161-ktb9w\" (UID: \"6000663c-51be-464e-a628-0a96371eac3a\") " pod="openstack/keystone-cron-29525161-ktb9w"
Feb 19 14:01:00 crc kubenswrapper[4833]: I0219 14:01:00.323348 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6000663c-51be-464e-a628-0a96371eac3a-config-data\") pod \"keystone-cron-29525161-ktb9w\" (UID: \"6000663c-51be-464e-a628-0a96371eac3a\") " pod="openstack/keystone-cron-29525161-ktb9w"
Feb 19 14:01:00 crc kubenswrapper[4833]: I0219 14:01:00.323409 4833 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6000663c-51be-464e-a628-0a96371eac3a-fernet-keys\") pod \"keystone-cron-29525161-ktb9w\" (UID: \"6000663c-51be-464e-a628-0a96371eac3a\") " pod="openstack/keystone-cron-29525161-ktb9w"
Feb 19 14:01:00 crc kubenswrapper[4833]: I0219 14:01:00.427884 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6000663c-51be-464e-a628-0a96371eac3a-fernet-keys\") pod \"keystone-cron-29525161-ktb9w\" (UID: \"6000663c-51be-464e-a628-0a96371eac3a\") " pod="openstack/keystone-cron-29525161-ktb9w"
Feb 19 14:01:00 crc kubenswrapper[4833]: I0219 14:01:00.428274 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6000663c-51be-464e-a628-0a96371eac3a-combined-ca-bundle\") pod \"keystone-cron-29525161-ktb9w\" (UID: \"6000663c-51be-464e-a628-0a96371eac3a\") " pod="openstack/keystone-cron-29525161-ktb9w"
Feb 19 14:01:00 crc kubenswrapper[4833]: I0219 14:01:00.428349 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkr44\" (UniqueName: \"kubernetes.io/projected/6000663c-51be-464e-a628-0a96371eac3a-kube-api-access-mkr44\") pod \"keystone-cron-29525161-ktb9w\" (UID: \"6000663c-51be-464e-a628-0a96371eac3a\") " pod="openstack/keystone-cron-29525161-ktb9w"
Feb 19 14:01:00 crc kubenswrapper[4833]: I0219 14:01:00.428425 4833 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6000663c-51be-464e-a628-0a96371eac3a-config-data\") pod \"keystone-cron-29525161-ktb9w\" (UID: \"6000663c-51be-464e-a628-0a96371eac3a\") " pod="openstack/keystone-cron-29525161-ktb9w"
Feb 19 14:01:00 crc kubenswrapper[4833]: I0219 14:01:00.437374 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6000663c-51be-464e-a628-0a96371eac3a-fernet-keys\") pod \"keystone-cron-29525161-ktb9w\" (UID: \"6000663c-51be-464e-a628-0a96371eac3a\") " pod="openstack/keystone-cron-29525161-ktb9w"
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6000663c-51be-464e-a628-0a96371eac3a-fernet-keys\") pod \"keystone-cron-29525161-ktb9w\" (UID: \"6000663c-51be-464e-a628-0a96371eac3a\") " pod="openstack/keystone-cron-29525161-ktb9w" Feb 19 14:01:00 crc kubenswrapper[4833]: I0219 14:01:00.443773 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6000663c-51be-464e-a628-0a96371eac3a-config-data\") pod \"keystone-cron-29525161-ktb9w\" (UID: \"6000663c-51be-464e-a628-0a96371eac3a\") " pod="openstack/keystone-cron-29525161-ktb9w" Feb 19 14:01:00 crc kubenswrapper[4833]: I0219 14:01:00.445162 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6000663c-51be-464e-a628-0a96371eac3a-combined-ca-bundle\") pod \"keystone-cron-29525161-ktb9w\" (UID: \"6000663c-51be-464e-a628-0a96371eac3a\") " pod="openstack/keystone-cron-29525161-ktb9w" Feb 19 14:01:00 crc kubenswrapper[4833]: I0219 14:01:00.461288 4833 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkr44\" (UniqueName: \"kubernetes.io/projected/6000663c-51be-464e-a628-0a96371eac3a-kube-api-access-mkr44\") pod \"keystone-cron-29525161-ktb9w\" (UID: \"6000663c-51be-464e-a628-0a96371eac3a\") " pod="openstack/keystone-cron-29525161-ktb9w" Feb 19 14:01:00 crc kubenswrapper[4833]: I0219 14:01:00.534204 4833 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29525161-ktb9w" Feb 19 14:01:01 crc kubenswrapper[4833]: I0219 14:01:01.050985 4833 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29525161-ktb9w"] Feb 19 14:01:01 crc kubenswrapper[4833]: I0219 14:01:01.554121 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525161-ktb9w" event={"ID":"6000663c-51be-464e-a628-0a96371eac3a","Type":"ContainerStarted","Data":"d7ed3a9417158b1370f1dcfc3ac31571a87168986a8e1f6b28604a3f836953c7"} Feb 19 14:01:01 crc kubenswrapper[4833]: I0219 14:01:01.556864 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525161-ktb9w" event={"ID":"6000663c-51be-464e-a628-0a96371eac3a","Type":"ContainerStarted","Data":"3fd631d6a9391e02e8070258d016ef58e148040a3e2f5c5815ddd38377d420dc"} Feb 19 14:01:01 crc kubenswrapper[4833]: I0219 14:01:01.589547 4833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29525161-ktb9w" podStartSLOduration=1.5895253550000001 podStartE2EDuration="1.589525355s" podCreationTimestamp="2026-02-19 14:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 14:01:01.573343838 +0000 UTC m=+4471.968862636" watchObservedRunningTime="2026-02-19 14:01:01.589525355 +0000 UTC m=+4471.985044123" Feb 19 14:01:03 crc kubenswrapper[4833]: I0219 14:01:03.579603 4833 generic.go:334] "Generic (PLEG): container finished" podID="6000663c-51be-464e-a628-0a96371eac3a" containerID="d7ed3a9417158b1370f1dcfc3ac31571a87168986a8e1f6b28604a3f836953c7" exitCode=0 Feb 19 14:01:03 crc kubenswrapper[4833]: I0219 14:01:03.579676 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525161-ktb9w" event={"ID":"6000663c-51be-464e-a628-0a96371eac3a","Type":"ContainerDied","Data":"d7ed3a9417158b1370f1dcfc3ac31571a87168986a8e1f6b28604a3f836953c7"} Feb 19 14:01:04 crc 
kubenswrapper[4833]: I0219 14:01:04.986211 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29525161-ktb9w" Feb 19 14:01:05 crc kubenswrapper[4833]: I0219 14:01:05.136369 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6000663c-51be-464e-a628-0a96371eac3a-config-data\") pod \"6000663c-51be-464e-a628-0a96371eac3a\" (UID: \"6000663c-51be-464e-a628-0a96371eac3a\") " Feb 19 14:01:05 crc kubenswrapper[4833]: I0219 14:01:05.137392 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6000663c-51be-464e-a628-0a96371eac3a-combined-ca-bundle\") pod \"6000663c-51be-464e-a628-0a96371eac3a\" (UID: \"6000663c-51be-464e-a628-0a96371eac3a\") " Feb 19 14:01:05 crc kubenswrapper[4833]: I0219 14:01:05.137586 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkr44\" (UniqueName: \"kubernetes.io/projected/6000663c-51be-464e-a628-0a96371eac3a-kube-api-access-mkr44\") pod \"6000663c-51be-464e-a628-0a96371eac3a\" (UID: \"6000663c-51be-464e-a628-0a96371eac3a\") " Feb 19 14:01:05 crc kubenswrapper[4833]: I0219 14:01:05.137645 4833 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6000663c-51be-464e-a628-0a96371eac3a-fernet-keys\") pod \"6000663c-51be-464e-a628-0a96371eac3a\" (UID: \"6000663c-51be-464e-a628-0a96371eac3a\") " Feb 19 14:01:05 crc kubenswrapper[4833]: I0219 14:01:05.143468 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6000663c-51be-464e-a628-0a96371eac3a-kube-api-access-mkr44" (OuterVolumeSpecName: "kube-api-access-mkr44") pod "6000663c-51be-464e-a628-0a96371eac3a" (UID: "6000663c-51be-464e-a628-0a96371eac3a"). InnerVolumeSpecName "kube-api-access-mkr44". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 19 14:01:05 crc kubenswrapper[4833]: I0219 14:01:05.146099 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6000663c-51be-464e-a628-0a96371eac3a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6000663c-51be-464e-a628-0a96371eac3a" (UID: "6000663c-51be-464e-a628-0a96371eac3a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:01:05 crc kubenswrapper[4833]: I0219 14:01:05.180763 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6000663c-51be-464e-a628-0a96371eac3a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6000663c-51be-464e-a628-0a96371eac3a" (UID: "6000663c-51be-464e-a628-0a96371eac3a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:01:05 crc kubenswrapper[4833]: I0219 14:01:05.214733 4833 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6000663c-51be-464e-a628-0a96371eac3a-config-data" (OuterVolumeSpecName: "config-data") pod "6000663c-51be-464e-a628-0a96371eac3a" (UID: "6000663c-51be-464e-a628-0a96371eac3a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 19 14:01:05 crc kubenswrapper[4833]: I0219 14:01:05.239611 4833 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6000663c-51be-464e-a628-0a96371eac3a-config-data\") on node \"crc\" DevicePath \"\"" Feb 19 14:01:05 crc kubenswrapper[4833]: I0219 14:01:05.239656 4833 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6000663c-51be-464e-a628-0a96371eac3a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 14:01:05 crc kubenswrapper[4833]: I0219 14:01:05.239671 4833 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkr44\" (UniqueName: \"kubernetes.io/projected/6000663c-51be-464e-a628-0a96371eac3a-kube-api-access-mkr44\") on node \"crc\" DevicePath \"\"" Feb 19 14:01:05 crc kubenswrapper[4833]: I0219 14:01:05.239683 4833 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6000663c-51be-464e-a628-0a96371eac3a-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 19 14:01:05 crc kubenswrapper[4833]: I0219 14:01:05.602011 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29525161-ktb9w" event={"ID":"6000663c-51be-464e-a628-0a96371eac3a","Type":"ContainerDied","Data":"3fd631d6a9391e02e8070258d016ef58e148040a3e2f5c5815ddd38377d420dc"} Feb 19 14:01:05 crc kubenswrapper[4833]: I0219 14:01:05.602055 4833 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fd631d6a9391e02e8070258d016ef58e148040a3e2f5c5815ddd38377d420dc" Feb 19 14:01:05 crc kubenswrapper[4833]: I0219 14:01:05.602110 4833 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29525161-ktb9w" Feb 19 14:01:15 crc kubenswrapper[4833]: I0219 14:01:15.745027 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 14:01:15 crc kubenswrapper[4833]: I0219 14:01:15.745578 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 14:01:45 crc kubenswrapper[4833]: I0219 14:01:45.744615 4833 patch_prober.go:28] interesting pod/machine-config-daemon-c2lxp container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 14:01:45 crc kubenswrapper[4833]: I0219 14:01:45.745188 4833 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 14:01:45 crc kubenswrapper[4833]: I0219 14:01:45.745237 4833 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" Feb 19 
14:01:45 crc kubenswrapper[4833]: I0219 14:01:45.746657 4833 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6c504a76ab972f68304dd5b4c14ea92d43763cacdcefa21ea2554701acbc166b"} pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 14:01:45 crc kubenswrapper[4833]: I0219 14:01:45.746765 4833 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerName="machine-config-daemon" containerID="cri-o://6c504a76ab972f68304dd5b4c14ea92d43763cacdcefa21ea2554701acbc166b" gracePeriod=600 Feb 19 14:01:45 crc kubenswrapper[4833]: E0219 14:01:45.895918 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b" Feb 19 14:01:46 crc kubenswrapper[4833]: I0219 14:01:46.070766 4833 generic.go:334] "Generic (PLEG): container finished" podID="a396d626-cea2-42cf-84c5-943b0b85a92b" containerID="6c504a76ab972f68304dd5b4c14ea92d43763cacdcefa21ea2554701acbc166b" exitCode=0 Feb 19 14:01:46 crc kubenswrapper[4833]: I0219 14:01:46.070810 4833 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" event={"ID":"a396d626-cea2-42cf-84c5-943b0b85a92b","Type":"ContainerDied","Data":"6c504a76ab972f68304dd5b4c14ea92d43763cacdcefa21ea2554701acbc166b"} Feb 19 14:01:46 crc kubenswrapper[4833]: I0219 14:01:46.070841 4833 scope.go:117] "RemoveContainer" containerID="591c0b25dd56124bace57f3410df3ed7c130b33cb8b1ea78fd95240b008644a4" Feb 19 14:01:46 crc kubenswrapper[4833]: I0219 14:01:46.071597 4833 scope.go:117] "RemoveContainer" containerID="6c504a76ab972f68304dd5b4c14ea92d43763cacdcefa21ea2554701acbc166b" Feb 19 14:01:46 crc kubenswrapper[4833]: E0219 14:01:46.071889 4833 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-c2lxp_openshift-machine-config-operator(a396d626-cea2-42cf-84c5-943b0b85a92b)\"" pod="openshift-machine-config-operator/machine-config-daemon-c2lxp" podUID="a396d626-cea2-42cf-84c5-943b0b85a92b"
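The repeating probe failures above are the kubelet's HTTP liveness check against http://127.0.0.1:8798/health getting connection refused; once the probe is declared unhealthy the container is killed (gracePeriod=600) and restarts are throttled by CrashLoopBackOff, whose delay grows up to the 5m0s cap quoted in the error. A self-contained sketch of that probe-and-backoff behaviour, with made-up thresholds and periods (the real failureThreshold and periodSeconds come from the pod's probe spec, which this log does not show):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe performs one HTTP liveness check; any transport error (such as
// "connect: connection refused") or non-2xx status counts as a failure.
func probe(url string) error {
	client := http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
		return fmt.Errorf("unhealthy status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	const failureThreshold = 3           // hypothetical value
	const backoffCap = 5 * time.Minute   // matches the "back-off 5m0s" in the log
	backoff := 10 * time.Second          // hypothetical initial restart delay

	failures := 0
	for range time.Tick(30 * time.Second) { // hypothetical probe period
		if err := probe("http://127.0.0.1:8798/health"); err != nil {
			failures++
			fmt.Printf("Probe failed probeType=\"Liveness\": %v\n", err)
			if failures >= failureThreshold {
				// Kill and restart the container; double the restart
				// delay up to the cap, akin to CrashLoopBackOff.
				fmt.Printf("restarting container, next back-off %s\n", backoff)
				backoff *= 2
				if backoff > backoffCap {
					backoff = backoffCap
				}
				failures = 0
			}
			continue
		}
		failures = 0 // a success resets the consecutive-failure count
	}
}

In the log the daemon's container keeps failing this check on every restart, which is why the same "back-off 5m0s restarting failed container" error recurs at the end of the excerpt.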